https://www.physicsforums.com/threads/deflection-of-beams-problem.306872/
# Deflection of beams problem

**1. Apr 12, 2009 - Aerstz**

**The problem statement, all variables and given/known data**

Deflection of a simply-supported beam. Please see the attached image of an example problem from a textbook: http://img14.imageshack.us/img14/619/hearnbeamproblem.png [Broken]

I have absolutely no idea why A = -(wL^3)/24, nor why 0 = (wL^4)/12 - (wL^4)/24 + AL is used in the determination of A. I especially do not know why (wL^4)/12 is used in the above equation. I would have thought that A would represent the left beam support, where I also would have thought that x = 0. But, according to the example in the attached image, x at A = L.

**2. Apr 12, 2009 - Aerstz**

Another example of a deflection problem: http://img18.imageshack.us/img18/2868/beerbeamproblem.png [Broken]

I am not sure why C1 = 1/2 PL^2, but I have no idea why C2 = -1/3 PL^3. My maths is very weak; I think I just need some kind soul to gently walk me through this!

**3. Apr 12, 2009 - Aerstz**

And a third example: http://img133.imageshack.us/img133/9210/be3amproblemthree.png [Broken]

I have no idea how 52.08 came to equal A.

**4. Apr 13, 2009 - PhanthomJay**

The terms A, B and C1 in the problems above refer to the constants of integration of the differential equation, as determined from the boundary conditions; they do not in any way refer to the support reactions. Boundary conditions are established at the ends of the beams based on the support conditions. For example, if there is no deflection at a left end support, then the vertical deflection y equals 0 when x, the horizontal distance from the left end, is 0. The values of the constants of integration are derived by carefully following the given steps in the examples.

**5. Apr 13, 2009 - Aerstz**

That's the problem; I am unable to follow the steps in the examples. The steps are too big; I need smaller steps to bridge the gaps. To me, the examples seem to go from A straight to Z in one giant leap. I need to know B, C, D, etc., in between. Currently I am completely blind to what these intermediate steps are. For example, and as I asked above in the first post: why does A = -(wL^3)/24? What I mean to ask is: how was the (wL^3)/24 arrived at? I am extremely challenged by this 'simple' mathematics and I really need a kind soul to guide me through it very gently and slowly!

**6. Apr 13, 2009 - PhanthomJay**

I hear you. Looking at part of the first problem, step by step, inch by inch:

1. $$EI(y) = wLx^3/12 - wx^4/24 + Ax + B$$

Since there is no deflection at the left end, we have y = 0 when x = 0, so substitute these zero values into Step 1 to obtain

2. $$0 = 0 - 0 + 0 + B$$, which yields

3. $$B = 0$$, thus Eq. 1 becomes

4. $$EI(y) = wLx^3/12 - wx^4/24 + Ax$$

Since at the right end, at x = L, we also know that y = 0, substitute x = L and y = 0 into Eq. 4 to yield

5. $$0 = wL(L^3)/12 - wL^4/24 + AL$$, or

6. $$0 = wL^4/12 - wL^4/24 + AL$$.

Since the first term in Eq. 6, $$wL^4/12$$, can be rewritten as $$2wL^4/24$$, we get

7. $$0 = (2wL^4/24 - wL^4/24) + AL$$, or

8. $$0 = wL^4/24 + AL$$.

Now divide both sides of the equation by L:

9. $$0 = wL^3/24 + A$$.

Solve for A by subtracting $$wL^3/24$$ from both sides of the equation:

10. $$0 - wL^3/24 = (wL^3/24 - wL^3/24) + A$$, or

11. $$-wL^3/24 = 0 + A$$, so

12. $$A = -wL^3/24$$

**7. Apr 13, 2009 - Aerstz**

Thank you very much, Jay.
I appreciate you taking the time to lay the process out as you did! It is much clearer now, so hopefully I should be able to get past this quagmire and actually progress with some work!
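For anyone who wants to double-check the algebra in post #6, here is a small symbolic sketch in Python with SymPy. It is my own verification, not part of the original thread; the variable names simply mirror the discussion.

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
w, L = sp.symbols('w L', positive=True)

# Eq. 1 from post #6: EI*y for a simply supported beam under a uniform load w
EIy = w*L*x**3/12 - w*x**4/24 + A*x + B

# Boundary conditions: zero deflection at both supports, y(0) = 0 and y(L) = 0
constants = sp.solve([EIy.subs(x, 0), EIy.subs(x, L)], [A, B])

print(constants)  # {A: -L**3*w/24, B: 0}
```

This confirms the two results the thread works toward: B = 0 from the left support, and A = -wL^3/24 from the right one.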
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936626315116882, "perplexity": 944.7417032061331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00043.warc.gz"}
https://puzzling.stackexchange.com/questions/25039/10-9-8-7-6-5-4-3-2-1-2016
# 10 9 8 7 6 5 4 3 2 1 = 2016

Add the four basic operators $\times\div+\,\;-$ and optionally brackets to:

$10 \quad 9 \quad 8 \quad 7 \quad 6 \quad 5 \quad 4 \quad 3 \quad 2 \quad 1$

to get the total $2016$.

Rules:

• We are looking for the simplest solution - i.e. the fewest characters (ignoring spaces). Please include your character count in your answer.
• Keep the order; do not add or combine numbers.
• Use all four operators at least once.

Credit for initial concept: Alex Bellos

• Is it one of each operator? And can you combine numbers (e.g. 2 and 1 makes 21)? – JonTheMon Jan 4 '16 at 17:34
• If there are multiple ways of doing this, do you want the most complex, the most simple, or some other criteria? – Aggie Kidd Jan 4 '16 at 17:38
• Do the numbers need to be in that order in the equation? – JonTheMon Jan 4 '16 at 17:58
• Fun fact, if the caret were allowed: 10 x 9 + 8 + 7 * 6 + 5 ^ 4 x 3 + 2 - 1 (credit: @TheDanWoods on Twitter) – rybo111 Jan 5 '16 at 0:20
• @rybo111 there are actually 2 ways with a single caret: 10 x 9 + 8 + 7 x 6 + 5 ^ 4 x 3 + 2 - 1 and 10 x 9 + 8 x 7 - 6 + 5 ^ 4 x 3 + 2 - 1. Edit: oops, 2 not 5 - foiled by integer division! – ejrb Jan 5 '16 at 15:40

---

**22 characters**

$10 \times 9 \times 8 \times 7 \times 6 \div 5 \div (4 - 3 + 2) \times 1$

I looked at @Will's answer and found a way to improve on it.

• @rybo111 - Brute force turns up no solutions without groupings using only +, -, *, and /. The closest two are: 10*9*8*7/6/5*4*3-2+1 = 2015.0 and 10*9*8*7/6/5*4*3+2-1 = 2017.0. This should be accepted. :) – Will Jan 4 '16 at 19:31
• @Will keep that second one saved for next year! – Joel Rondeau Jan 4 '16 at 19:32
• Couldn't this save one character? It seems to me the final operator is unnecessary. – Jordan Jan 4 '16 at 19:43
• @rybo111 brackets are an implicit multiplication. The answer is exactly the same if the final * is removed, but with one less character. – Jordan Jan 4 '16 at 19:54
• @rybo111 There is plenty of mathematical notation that is accepted by mathematical professionals but doesn't work on Google, for various reasons (ranging from the fact that it can't be written as plain text to the fact that Google simply doesn't implement it).
– hexafraction Jan 5 '16 at 0:39

---

**22 characters**

I don't think you can beat Joel's answer at 22 characters, but there are some nice ways to tie it (including a variation of Joel's, for completeness):

$(10 - 9 + 8 \times 7 \times 6 - 5 + 4) \times 3 \times 2 \times 1$

$10 - 9 + 8 \times 7 \times (6 \times 5 + 4 \times 3 \div 2) - 1$

$10 - 9 + 8 \times 7 \times (6 + 5 \times 4 \times 3 \div 2) - 1$

$10 - 9 + 8 \times 7 \times 6 \div (5 - 4) \times 3 \times 2 - 1$

$10 - 9 + 8 \times 7 \times 6 \times (5 + 4) \div 3 \times 2 - 1$

$10 - 9 + 8 \times 7 \times 6 \times (5 + 4 + 3) \div 2 - 1$

$10 + (9 \times 8 \times 7 - 6 + 5) \times 4 - 3 \times 2 \div 1$

$10 \times 9 \times 8 \times 7 \div (6 - 5 + 4 - 3 \div 2 - 1)$

$10 \times 9 \times 8 \times 7 \div (6 \times 5 \div 4 - 3 \times 2 + 1)$

$10 \times 9 \times 8 \times 7 \times 6 \div (5 \times 4 - 3 \times 2 + 1)$

$10 \times 9 \times 8 \times 7 \times 6 \div (5 + 4 \times 3 - 2 \times 1)$

$10 \times 9 \times 8 \times 7 \times 6 \div 5 \div (4 - 3 + 2) \times 1$

There are other ways, but many of them are trivial variations (change $\div 1$ to $\times 1$ or vice-versa, or $\div 1)$ to $)\div 1$).

If we didn't have the restriction that we need to use all the different operators, there is also a very nice solution:

$10 \times 9 \times 8 \times 7 \times 6 \div (5 + 4 + 3 + 2 + 1)$

• The non-restriction version is so clean! – rybo111 Jan 5 '16 at 0:09
• @rybo111 Yeah, I like how the / is right in the middle. – Paulpro Jan 5 '16 at 1:04
• You found so many ways. Is there any generalized method to come up with these, or just random guesses? – Mahesha999 Jan 5 '16 at 13:42
• 22 characters is indeed the best possible solution, as it cannot be done without parentheses (verified by exhaustive search) – ejrb Jan 5 '16 at 16:22
• I don't understand why nobody is taking the hints in multiple comments. You can take many of these to 21 characters by simply removing any multiplication operators next to a parenthesis. – Jordan Jan 6 '16 at 16:11

---

I came up with:

$(10 - 9) \times 8 \times 7 \times 6 \times (5 - 4 + 3 + 2 \div 1)$

This is 9 operators and 2 required groupings, for a total of 24 characters.

• Updated question – rybo111 Jan 4 '16 at 18:17
• All right, editing in division real quick – Will Jan 4 '16 at 18:18
• Great stuff! Can anyone beat 24 characters? – rybo111 Jan 4 '16 at 18:23
• I was just writing an answer with your edit. – tfitzger Jan 4 '16 at 18:45
• You could also remove the two asterisks right next to the parentheses, i.e. 6(5-4) = 6*(5-4) – Marco Bonelli Jan 4 '16 at 19:27

---

$10 + 9 \times 8 - 7 + 654 \times 3 - 21$

17 characters... Is mushing numbers together allowed? Also, yay Mathematica.

• Oh, wait, it isn't allowed. – Shane Di Dona Jan 5 '16 at 0:32
• Still another cool solution, though! – rybo111 Jan 5 '16 at 0:36
• It's also missing $÷$ – Jamie Barker Jan 6 '16 at 8:06
• I like your solution and your use of the word mushing! How do you find something like this using Mathematica? Is there a special command you used? – CJD Jan 6 '16 at 13:11

---

If we allow for implicit multiplication of parenthesized expressions, then the following solutions, all of length 20, become possible:

$10\times 9\times 8\times 7(6\div 5-4+3) 2\times 1$

$10\times 9\times 8\times 7 (6\div 5-4+3) 2\div 1$

$10\times 9\times 8 (7-6\div 5-4+3-2) 1$

This list has been generated via exhaustive search, and excludes needlessly parenthesized expressions that only contain multiplication or division.

• Bravo!
21 was the previous leader with implicit multiplication – rybo111 Jan 6 '16 at 22:35

---

**21 chars**

$10 \times 9 \times 8 (7 - 6 \div 5 - 4 + 3 - 2 \times 1)$

---

**22 chars:**

10 x 9 x 8 x 7 x 6/(5 + 4 + 3 + 2 + 1)

• @PaulPro already posted this solution – rybo111 Jan 5 '16 at 8:51

---

**22 chars**

±10*9*8*7*6/(5+4+3+2+1)

If ± is valid as a usage of the - character, then you get two answers, of which one is correct :)

Here is one more with 22 characters not mentioned in @PaulPro's answer:

10*9*8*7*6/(5*4-3-2*1)

Edit: As @DanHenderson pointed out, this has no + operator.

• Has no + operator. – Dan Henderson Jan 5 '16 at 14:45

---

**24 chars:**

$(10-9+8) \times 7 \times (6 \times 5 + 4-3 + 2-1)$

$9 \times 7 \times 32 = 2016$

---

**~~22~~ 20 characters**

10 + 9 * 8 * 7 * 4 - 6 / 3 - 5 - 2 - 1

Without order :(

• I count 20 characters. – Crispy Jan 4 '16 at 20:11
• You have changed the order of the numbers, so this answer is invalid; but otherwise well done – RedLaser Jan 4 '16 at 21:18

---

**21!**

10*9*8*7*6/(5+4+3+2+1)

But not valid, I guess...

• This was already included in @PaulPro's answer, and a couple of others'. – Peter LeFanu Lumsdaine Jan 5 '16 at 13:53

---

**27 characters (missing /):**

(10-9)(8*7)(6-5)(4)(3)(2+1)

and **22 characters (missing -):**

10*9*8*7*6/(5+4+3+2+1)

• Missing minus in the last suggestion, and missing / in the first – Viktor Mellgren Jan 5 '16 at 10:33
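Several of the comments above refer to exhaustive search. For the bracket-free case (the one @Will's comment settles), a brute-force check is tiny. Here is a sketch in Python, relying on eval for the standard operator precedence; true division is used, which is why the near-misses quoted above come out as 2015.0 and 2017.0:

```python
from itertools import product

nums = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]

# Try all 4^9 ways to interleave the four operators between the fixed digits.
# eval() applies the usual precedence; no parenthesizations are explored here.
for ops in product('+-*/', repeat=len(nums) - 1):
    expr = ''.join(f'{n}{op}' for n, op in zip(nums, ops)) + str(nums[-1])
    if eval(expr) == 2016:
        print(expr)
```

As reported in the comments, this prints nothing: without brackets there is no solution. Verifying the 22-character optimum with brackets additionally requires enumerating parenthesizations (binary expression trees over the ten operands), which is what the "verified by exhaustive search" comments did.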
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39838701486587524, "perplexity": 1495.873741076977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00327.warc.gz"}
https://tabula.archaeo.science/
## Overview

An easy way to examine archaeological count data. This package provides a convenient and reproducible toolkit for relative and absolute dating and for the analysis of (chronological) patterns. It includes functions for matrix seriation (reciprocal ranking, CA-based seriation), chronological modeling and dating of archaeological assemblages and/or objects. Beyond these, the package provides several tests and measures of diversity: heterogeneity and evenness (Brillouin, Shannon, Simpson, etc.), richness and rarefaction (Chao1, Chao2, ACE, ICE, etc.), turnover and similarity (Brainerd-Robinson, etc.). The package makes it easy to visualize count data and statistical thresholds: rank vs. abundance plots, heatmaps, Ford (1962) and Bertin (1977) diagrams.

To cite tabula in publications please use:

Frerebeau, N. (2019). tabula: An R Package for Analysis, Seriation, and Visualization of Archaeological Count Data. Journal of Open Source Software, 4(44), 1821. DOI 10.21105/joss.01821.

## Installation

You can install the released version of tabula from CRAN with:

```r
install.packages("tabula")
```

Or install the development version from GitHub with:

```r
# install.packages("remotes")
remotes::install_github("nfrerebeau/tabula")
```

## Usage

```r
# Load packages
library(tabula)
library(khroma)
library(ggplot2)
library(magrittr)
```

tabula uses a set of S4 classes that extend the basic matrix data type. These new classes represent different special types of matrix:

• Numeric matrices:
  • CountMatrix represents absolute frequency data,
  • AbundanceMatrix represents relative frequency data,
  • OccurrenceMatrix represents a co-occurrence matrix,
  • SimilarityMatrix represents a (dis)similarity matrix,
• Logical matrices:
  • IncidenceMatrix represents presence/absence data,
  • StratigraphicMatrix represents stratigraphic relationships.

It assumes that you keep your data tidy: each variable (type/taxon) must be saved in its own column and each observation (sample/case) must be saved in its own row. These new classes are simple to use; please refer to the documentation of the codex package, where these classes are defined.

### Visualization

Several types of graphs are available in tabula, which uses ggplot2 for plotting. This makes it easy to customize diagrams (e.g. using themes and scales). A spot matrix[1] allows direct examination of the data:

```r
# Plot co-occurrence of types
# (i.e. how many times (percent) each pair of taxa occurs together
# in at least one sample.)
mississippi %>%
  as_occurrence() %>%
  plot_spot() +
  ggplot2::labs(size = "", colour = "Co-occurrence") +
  ggplot2::theme(legend.box = "horizontal") +
  khroma::scale_colour_YlOrBr()
```

Bertin or Ford (battleship curve) diagrams can be plotted, with statistical thresholds (including B. Desachy's sériographe):

```r
mississippi %>%
  as_count() %>%
  plot_bertin(threshold = mean) +
  khroma::scale_fill_vibrant()

compiegne %>%
  as_count() %>%
  plot_ford()
```

### Seriation

```r
# Build an incidence matrix with random data
set.seed(12345)
incidence <- IncidenceMatrix(data = sample(0:1, 400, TRUE, c(0.6, 0.4)),
                             nrow = 20)

# Get the seriation order on rows and columns
# (reciprocal ranking seriation)
(indices <- seriate_reciprocal(incidence, margin = c(1, 2)))
#> <PermutationOrder: 4bffe51c-75f2-4bc3-a2d0-7c005a75b349>
#> Permutation order for matrix seriation:
#> - Row order: 1 4 20 3 9 16 19 10 13 2 11 7 17 5 6 18 14 15 8 12...
#> - Column order: 1 16 9 4 8 14 3 20 13 2 6 18 7 17 5 11 19 12 15 10...
#> - Method: reciprocal # Permute matrix rows and columns incidence2 <- permute(incidence, indices) # Plot matrix plot_heatmap(incidence) + ggplot2::labs(title = "Original matrix") + ggplot2::scale_fill_manual(values = c("TRUE" = "black", "FALSE" = "white")) plot_heatmap(incidence2) + ggplot2::labs(title = "Rearranged matrix") + ggplot2::scale_fill_manual(values = c("TRUE" = "black", "FALSE" = "white")) ### Dating This package provides an implementation of the chronological modeling method developed by Bellanger and Husi (2012). This method is slightly modified here and allows the construction of different probability density curves of archaeological assemblage dates (event, activity and tempo). Note that this implementation is experimental (see help(date_event)). # Coerce dataset to abundance (count) matrix zuni_counts <- as_count(zuni) # Assume that some assemblages are reliably dated (this is NOT a real example) # The names of the vector entries must match the names of the assemblages set_dates(zuni_counts) <- c( LZ0569 = 1097, LZ0279 = 1119, CS16 = 1328, LZ0066 = 1111, LZ0852 = 1216, LZ1209 = 1251, CS144 = 1262, LZ0563 = 1206, LZ0329 = 1076, LZ0005Q = 859, LZ0322 = 1109, LZ0067 = 863, LZ0578 = 1180, LZ0227 = 1104, LZ0610 = 1074 ) # Model the event date for each assemblage model <- date_event(zuni_counts, cutoff = 90) # Plot activity and tempo distributions plot_date(model, type = "activity", select = "LZ1105") + ggplot2::labs(title = "Activity plot") + ggplot2::theme_bw() plot_date(model, type = "tempo", select = "LZ1105") + ggplot2::labs(title = "Tempo plot") + ggplot2::theme_bw() ### Analysis Diversity can be measured according to several indices (sometimes referred to as indices of heterogeneity): mississippi %>% as_count() %>% index_heterogeneity(method = "shannon") #> <HeterogeneityIndex: 05b1f084-1042-4314-b597-72a5b9bb6d79> #> - Method: shannon #> size index #> 10-P-1 153 1.2027955 #> 11-N-9 758 0.7646565 #> 11-N-1 1303 0.9293974 #> 11-O-10 638 0.8228576 #> 11-N-4 1266 0.7901428 #> 13-N-5 79 0.9998430 #> 13-N-4 241 1.2051989 #> 13-N-16 171 1.1776226 #> 13-O-11 128 1.1533432 #> 13-O-10 226 1.2884172 #> 13-P-1 360 1.1725355 #> 13-P-8 192 1.5296294 #> 13-P-10 91 1.7952443 #> 13-O-7 1233 1.1627477 #> 13-O-5 1709 1.0718463 #> 13-N-21 614 0.9205717 #> 12-O-5 424 1.1751002 #> Holden Lake 360 0.7307620 #> 13-N-15 1300 1.1270126 #> 12-N-3 983 1.0270291 ## Test difference in Shannon diversity between assemblages ## (returns a matrix of adjusted p values) mississippi[1:5, ] %>% as_count() %>% test_diversity() #> 10-P-1 11-N-9 11-N-1 11-O-10 #> 11-N-9 0.000000e+00 NA NA NA #> 11-N-1 3.609626e-08 8.538298e-05 NA NA #> 11-O-10 2.415845e-13 4.735511e-01 2.860461e-02 NA #> 11-N-4 0.000000e+00 7.116363e-01 7.961107e-05 0.7116363 Note that berger, mcintosh and simpson methods return a dominance index, not the reciprocal form usually adopted, so that an increase in the value of the index accompanies a decrease in diversity. Corresponding evenness (i.e. a measure of how evenly individuals are distributed across the sample) can also be computed, as well as richness and rarefaction. Several methods can be used to ascertain the degree of turnover in taxa composition along a gradient on qualitative (presence/absence) data. It assumes that the order of the matrix rows (from 1 to n) follows the progression along the gradient/transect. 
Diversity can also be measured by addressing similarity between pairs of sites:

```r
## Calculate the Brainerd-Robinson index
## and plot the similarity matrix
mississippi %>%
  as_count() %>%
  similarity(method = "brainerd") %>%
  plot_spot() +
  ggplot2::labs(size = "Similarity", colour = "Similarity") +
  khroma::scale_colour_iridescent()
```

The Frequency Increment Test can be used to assess the detection and quantification of selective processes in the archaeological record[2]:

```r
## Keep only decoration types that have a maximum frequency of at least 50
keep <- apply(X = merzbach, MARGIN = 2, FUN = function(x) max(x) >= 50)
merzbach_count <- as_count(merzbach[, keep])

## The data are grouped by phase
## We use the row names as time coordinates (roman numerals)
set_dates(merzbach_count) <- rownames(merzbach)

## Plot time vs abundance and highlight selection
plot_time(merzbach_count, highlight = "FIT", roll = TRUE) +
  ggplot2::theme_bw() +
  khroma::scale_color_contrast()
```

## Contributing

Please note that the tabula project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

1. Adapted from Dan Gopstein's original idea.
2. Adapted from Ben Marwick's original idea.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3109622299671173, "perplexity": 17040.180031474916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687725.76/warc/CC-MAIN-20200126043644-20200126073644-00221.warc.gz"}
https://www.pollingindicator.com/p/method.html
### Method

The Irish Polling Indicator combines all national election polls into one estimate of political support for each party. The creator is Tom Louwerse, Assistant Professor in Political Science, Leiden University (the Netherlands). Work on the Irish Polling Indicator started when he was working at the Department of Political Science at Trinity College Dublin. The approach used by the Irish Polling Indicator is described in detail in this article published in Irish Political Studies (Open Access).

The polls used are published national surveys by Behaviour & Attitudes, Ipsos MRBI, Millward Brown and Red C Research.

Basic idea

The basic idea of the Irish Polling Indicator is to take all available polling information together to arrive at the best estimate of current support for parties. Polls are great tools for measuring public opinion, but because only a limited sample is surveyed, we need to take into account sampling error. By combining multiple polls, we can reduce this error. Moreover, with so many polls going around it is difficult to get a random sample of voters to participate in any one public opinion survey. And those that do participate might not have a clear idea who to vote for, something that is often adjusted for in polls. This may lead to structural differences between the results of different polling companies, so-called house effects.

But how do you average two polls if one was conducted today, another one week ago and yet another three weeks ago? Just take the average of the three? Weight the more recent ones more heavily perhaps, but by how much exactly? The Polling Indicator assumes that public opinion changes every day, but only by so much. If Labour was on 10% last week and turns out to poll 18% today, we might question whether one of these polls (or even both) is an outlier, one which just by chance contains many more or fewer Labour voters than there are in the general public. The Polling Indicator assumes that support for a party can go up or down, but that radical changes are quite rare. But if one party is generally more volatile, it will take this into account.

Minor parties and independents

The Irish Polling Indicator contains a rather large category of 'Others/independents', which lumps together minor parties and independent candidates. It is somewhat complicated to break this down in the context of the Irish Polling Indicator's model, as these groups have not been consistently reported in polls over the entire parliamentary term. As a way around this, support for these groups is analysed on a group-by-group basis. Basically, these analyses take into account all of the things the main model also looks at, but they do not guarantee that the support for these parties adds up exactly to the total for 'Others/independents'. In practice this is not too important. The breakdown includes three parties (AAA-PBP, Renua and Social Democrats) and Independent candidates/Independent Alliance, so there will be other, even smaller parties that are not included in the breakdown. Therefore the total of Others/Independents is likely to be higher than the sum of the four groups in the breakdown.

Model

This part is a little tricky and you probably need some statistical training to fully grasp it. The Irish Polling Indicator is based on a Bayesian statistical model, building on the work of several political scientists. It provides an estimate of each party's support on each day $$d$$. The percentage that this party gets in poll $$i$$ is called $$P_i$$ - this is something we know.
What we want to know is what this party's support among the whole electorate ($$A_d$$) is on each day. So how do we estimate this? First, we know what happens if we draw many random samples from a population. If we had a population with 20% support for Fine Gael, and drew a lot of random samples of size 1,000 from this population, most of these samples would yield a percentage for Fine Gael pretty close to 20%. But some would be further away. In fact, we know that the values we might obtain in all of these samples follow a normal distribution with a mean of $$A_d$$ and a standard deviation of $$\sqrt{\frac{A_d (1-A_d)}{N}}$$. Here $$N$$ stands for the sample size, 1,000 in our example. Since we do not know $$A_d$$, we approximate the standard deviation by using $$P_i$$ instead, so the first part of the model would look like this:

\begin{aligned} P_i & \sim \mathcal{N}\left(A_d, \sqrt{\frac{P_i (1-P_i)}{N}}\right) \\ \end{aligned}

The percentage that we find in the poll comes from a normal distribution with a mean of the real party support on the day the poll was held ($$A_d$$) and a standard deviation which mainly depends on the sample size ($$N$$). We don't know $$A_d$$, but are going to estimate it.

The actual model is somewhat more complicated, because it takes into account two other things. First, the standard deviation in the formula above (also called the standard error in this case) is only given by that simple formula if we have a simple random sample. Real-world polls usually have a more complicated strategy to select a sample, which may increase the standard error. By weighting their respondents (i.e. if you have 75% men in the survey, you might want to weight that down to 50%) error might be reduced. Therefore we allow the standard deviation to be a factor $$D$$ smaller or larger than the simple-random-sample value $$F_i$$.

Secondly, there might be structural differences between pollsters which cause a certain polling company to overestimate or underestimate a certain party. So they sample from a distribution with mean $$M_d$$, which is in fact a combination of the real percentage $$A_d$$ plus their house effect $$H_{b_i}$$. If their house effect is 0, they are polling from the 'correct' distribution and we only have to deal with sampling error. If their house effect is large, they might structurally underestimate or overestimate a party. This yields the following model (for each party):

\begin{aligned} (1)~~ P_i & \sim \mathcal{N}(M_d, F_iD) \\ (2)~~ M_d & = A_d + H_{b_i} \end{aligned}

The next part of the model relates a party's percentage today ($$A_d$$) to its percentage yesterday ($$A_{d-1}$$). As explained above, we expect day-to-day change in support to be limited. To ensure that party support sums to 100%, these day-to-day changes are modelled in terms of the log-ratio of support (where the first party is fixed at a log-ratio of 0).
For each day, the support is allowed to change somewhat up or down:

\begin{aligned} (3)~~ LA_{d} & \sim \mathcal{N}(LA_{d-1},\tau_{p}) \end{aligned}

We can calculate the vote share for each party from these log-ratios as follows:

\begin{aligned} (4)~~ A_{d} & = \frac{\exp(LA_{d})}{\sum \exp(LA_{i})} \end{aligned}

Priors

For the statistical nerds: the Bayesian model has the following priors:

\begin{aligned} (5)~~ \tau_{p} & \sim \mathrm{Uniform}(0,0.2) \\ (6)~~ H_{b} & \sim \mathrm{Uniform}(-0.2,0.2) \\ (7)~~ D & \sim \mathrm{Uniform}\left(\sqrt{\tfrac{1}{3}},\sqrt{3}\right) \\ \end{aligned}

The house effects $$H_{b_i}$$ are constrained to sum to zero over the companies $$b$$ to allow for model identification. The model is estimated in JAGS 3.4. It is usually run with 6 chains, with 30,000 burn-in iterations and 60,000 iterations (150 thinning interval), leaving 2,400 MCMC draws from the posterior distribution. Although the model is slow-mixing, this seems to be adequate and a good balance between speed and accuracy.

Sources

Fisher, S. D., Ford, R., Jennings, W., Pickup, M., & Wlezien, C. (2011). From polls to votes to seats: Forecasting the 2010 British general election. Electoral Studies, 30(2), 250-257.

Jackman, S. (2005). Pooling the polls over an election campaign. Australian Journal of Political Science, 40(4), 499-517.

Pickup, M. A., & Wlezien, C. (2009). On filtering longitudinal public opinion data: Issues in identification and representation of true change. Electoral Studies, 28(3), 354-367.

Pickup, M., & Johnston, R. (2008). Campaign trial heats as election forecasts: Measurement error and bias in 2004 presidential campaign polls. International Journal of Forecasting, 24(2), 272-284.

Pickup, M. (2011). 'Methodology'. http://pollob.politics.ox.ac.uk/documents/methodology.pdf
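To see what equations (3) and (4) do in practice, here is a minimal simulation sketch. It is written in Python rather than the JAGS the site actually uses, and the party labels and starting values are invented; the real model estimates these quantities from polls instead of simulating them.

```python
import numpy as np

rng = np.random.default_rng(1)

parties = ["A", "B", "C", "D"]   # invented labels
days, tau = 120, 0.02            # tau: daily s.d. of the log-ratios

# Equation (3): the log-ratios follow a day-to-day random walk;
# the first party is fixed at 0 to identify the model.
LA = np.zeros((days, len(parties)))
LA[0, 1:] = [-0.3, -0.8, -1.5]   # arbitrary starting log-ratios
for d in range(1, days):
    LA[d, 1:] = LA[d - 1, 1:] + rng.normal(0.0, tau, len(parties) - 1)

# Equation (4): transform log-ratios into vote shares that sum to one.
A = np.exp(LA) / np.exp(LA).sum(axis=1, keepdims=True)

# Sampling side of equation (1): a poll of size N on the last day would have
# standard error sqrt(p * (1 - p) / N) for a party polling p.
N = 1000
for name, p in zip(parties, A[-1]):
    print(f"{name}: {p:.3f} +/- {np.sqrt(p * (1 - p) / N):.3f}")
```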
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667016625404358, "perplexity": 1299.336840728116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512015.74/warc/CC-MAIN-20181018214747-20181019000247-00064.warc.gz"}
https://wiki.documentfoundation.org/TestLink/Admin_Guide
This page was marked as inactive and is retained for historical reference. We no longer do manual testing. All the test cases were moved to automated tests.

As an Admin, you have access to all the options needed to create new users, test cases, test plans, platforms, specifications and the like. You should also be able to add an issue tracker like Bugzilla, Redmine, etc.

On the top left of the screen, near your login name, there is a My Settings icon that lets you manage your login, e-mail address and locale. By changing the locale, you will change the user interface language. The workflow we are using will let you access the tests in the locale chosen here. This is also the place where you can manage your password and reset it. Under the password area, a button lets you generate an API key (it can be used to connect to other applications), and under this area a history of your connections is listed. Next to your login name, the role that has been attributed to you appears between square brackets.

### Users/Roles

Below your logout button (to the right of your "My Settings" one) you will find a button with a head on it. Hovering your mouse pointer over it will show the quick help Users/Roles. Here you can create new users, change them, or export them. The same goes for View Roles. You can also update Assign Test Project roles and Assign Test Plan roles.

#### Creating a new user

Say you want to create a new tester for a test. You need to

1. click on the Users/Roles button
2. click on Create
3. fill in the new user's data (login name, e-mail address and so on)
4. assign the Role to him/her (in this case Tester)
5. set the Locale to his/her language
6. click on Save

Now you have created your first tester for TestLink. You are also able to create other users for all needed roles here, like "Test Designer", "Senior Tester" or "Leader". You will find a short overview of each role in the [User Guide].

Now you want to change a user's role (maybe (s)he consistently did a good job and you want to reward him/her with a better position). You need to

1. click on the Users/Roles button
2. click on the user's login name
3. choose a different role there (e.g. Senior Tester or Test Designer)
4. click on Save

Your tester is now a Senior Tester (or Test Designer if you have chosen this role).

#### Disabling a user

It seems it is not possible to delete a user -- or rather, the author (tha) of this text has not found an easy way to do it, neither via TestLink's web interface nor with the MySQL/MariaDB that comes with Bitnami's TestLink VM -- but you can disable one in the Users/Roles interface. To do this

1. click on the Users/Roles icon
2. click on the red circle with the white cross next to the user name (Disable User)

### Creating a Test Project

To give your tester something to test, you first need to create a test project. To do this

1. go to the main page of TestLink
2. you will see the header Test Project and below it Test Project Management
3. click on Test Project Management
4. you will see a button labelled Create on the right
5. after clicking this button you will see a screen where you are able to inherit test cases and other data from an existing test project (Create from existing Test Project) or to create a completely new test project
6. fill in all required fields (though only Name and Prefix (used for the Test case ID) are required, it is recommended to add a description of your test project there)
7. click on Create

### Creating a Test Specification

In order to create Test Cases, you first need to create a Test Specification. To do this

1. click on Test Specification below the header of the same name
2. click on the name of the Test Project at the bottom left
3. click on the toothed-wheel icon at the top right (if you hover your mouse pointer over it, it will show Actions as quick help). To the right of the header Test Suite Operations, a green circle with a white cross (quick help: Create), two blue "A"s (Sort alphabetically), an icon with a green arrow pointing to a door (Import), something like a clipboard (Test Spec Document (HTML) on new window), and an icon for a MS Word™ document (Download Test Spec Document (Pseudo Word)) are now visible.
4. click on the Create icon
5. enter a name for the Test Suite and details in the field Details (e.g. "Writer" as name, with a description à la "Test Suite for Writer")
6. click on Save

Your first test suite is created. Now you can create test cases for it.

#### Exporting Test Specification

If you want to export e.g. the English test specifications in order to translate them into your language, you can

1. switch the test project at the top right to "EN LibreOffice"
2. go to "Test Specification" in the bottom frame with the header "Test Specification"
3. click on the toothed wheel ("Actions")
4. click on the third icon from the right ("Export All Test Suites")
5. (de)select the options in the next frame to your liking
6. click on "Export"
7. choose to save it to your hard disk in the next dialog
8. you can either rename this file so that it does not contain any spaces or special characters in the file name, or leave it as it is. It is your decision.

#### Importing Test Specification

To import the test specification into your language (say, for translating it into your language):

1. switch back to the Desktop
2. change the language back to your language at the top right
3. switch to the test specification again
4. click on the toothed wheel
5. click on the third button from the left ("Import")
6. choose the file which you have saved on your hard disk
7. start the import by uploading the file
8. wait until the process is finished

Now you can happily start translating the test specification.

#### Translating Test Specification

Now that you have imported the English test specification, you are able to translate it into your language. To do this,

1. expand the tree of your imported test specification (it is named "LibreOffice 00")
2. click on the first file on the left
3. click on the toothed wheel on the right
4. click on "Edit"
5. translate "Test Case Title", "Summary" and "Preconditions"
6. click on "Save" at the bottom left of the right side
7. Now you will see the "Step actions" and "Expected results" below your newly translated "Preconditions". Click on the first entry.
8. translate the text on the left and then the one on the right
9. click on "Save"
10. click on the next entry
11. translate it as well and save it
12. click on the last entry and translate it
13. click on "Save & exit" to finish your work

You have translated your first test specification.

### Creating a Test Case

To create a test case

1. click on your newly created Test Suite
2. click on the toothed-wheel icon at the top right. Now you will see two rows at the top right: one for Test Suite Operations and one for Test Case Operations.
3. click on the Create icon in the Test Case Operations row
4. enter a name in Test Case Title (e.g. "Writer start"), a description in Summary, and explain what is needed as Preconditions
5. enter a number for the test duration (e.g. "2" for two minutes)
6. leave the fields "Status", "Importance" and "Execution type" for now. They will be handled later in this guide.
7. You can set Keywords to define which module is concerned by the test.
Set it (or them, if you need more than one keyword)
8. click on "Create"
9. after this you will get a button labelled Create step
10. click on it
11. enter the step(s) for the test on the left below Step actions
12. enter the expected results on the right below "Expected results"
13. If you want to create further steps for this test, click on Save. If you want to finish your test creation here, click Save & exit.

Your first test case is created. If you have a test document that eases the test case (e.g. with data preset, so the tester doesn't have to spend time filling in the document), attach it to the test at the bottom of the screen.

### Assigning Users to a Test Case

Before a tester is able to test anything, you first have to assign one or more test cases to them. To do this,

1. click on Assign Test Case Execution below the Test Plan contents on the right side of the main window
2. click on the triangle before one of the test specifications at the bottom left of the next page to expand it
3. click on one of the test cases

You are now able to do either a bulk assignment (meaning that you are able to assign test cases to more than one tester) at the top of the right window pane, or you can use the field below, to the right of your test case(s), to assign it to a tester.

1. click in the field to the right of the test case and select your tester(s)
2. click on Save

Now your tester will be able to test something after (s)he has logged in to his/her account.

### Creating a Platform

As LibreOffice is used on different operating systems and hardware, it may be useful to create these platforms. To do this

1. click on Platform Management below the Test Project header
2. click on the Create platform icon
3. enter a name to the right of Platform (e.g. Windows) and a short description in the field below (e.g. All Windows-related tests)
4. click on Save

Your first platform is now ready to be connected to test cases.

### Setting the Urgency of a Test Case

Say you want your testers to test your really hot new feature, a fix for a really annoying bug, or a security fix as soon as possible. To do this, you just need to raise the urgency of the test case from its default Medium setting to High. It is also possible to lower the urgency of a test case to Low. But keeping to our first case here, we want to raise the urgency of a test case. To do this,

1. click on Set Urgent Test below the header Test Plan contents
2. click on the triangle in front of your test suite in the left pane. If you have only one test suite created by now, click on this instead.
3. You will now see your test case(s) on the right, with radio buttons for the different urgency levels (High, Medium and Low).
4. click on the radio button in front of the test case where you want to raise it
5. click the Set urgency for individual test cases icon

You have now raised the urgency of the test case(s).

### Creating Builds

As we provide several versions of LibreOffice each year, it is possible to create Test Suites for each build which we want to test. If you want to create a Test Suite for e.g. our "fresh" line (the upcoming 5.3.2), you will

1. go back to the main site of TestLink
2. click on Builds / Releases on the right below Test Plan
3. click on Create
4. enter e.g. 5.3.x.y as a title
5. enter a description below
6. set the remaining options (e.g. whether the build is active and open) to your liking
7. click on Save

Your first build is created now, and you are able to select between the different builds during your test run as a (senior) tester.
### Open Questions

• There seems to be no easy way to integrate a BTS (Bug Tracking System) like Bugzilla or Redmine without the risk of opening a security hole on your system. TestLink's documentation mentions an entry in its custom_config.inc.php and adding the tracker to your test plan. While it would be possible (for someone with access to the server where TestLink is installed) to add the needed entry to this PHP file, others may not have this possibility. And adding e.g. Bugzilla via the web interface means you need to copy their example, adapting it to your needs, but also add a user name and his/her password to that example before you save it. The author of their documentation also mentions that this would only work if your Bugzilla is using HTTP, not HTTPS. Maybe someone more knowledgeable than the author of this article has found a different solution for this problem. TestLink's documentation mentions a special user for your BTS with the ability to read your RDBMS' data, which may not be possible to create without access to the used BTS -- quite apart from security considerations.
• You will often find it mentioned that exporting a test report (or whatever else) to the OpenOffice format should be possible. This option is, however, not shown in our TestLink version, nor in Bitnami's. And exporting a test report to Pseudo Word and -- after downloading it to your hard disk -- opening it in LibreOffice only shows an I/O error. Running file $File_name.doc (where $File_name.doc is just a placeholder for the actual file) shows that it is only an HTML file with long lines. Interestingly, lynx $File_name.doc displays it, but opening it in LibreOffice -- alas -- shows the I/O error message anyway...
• TestLink's documentation is rather old. Finding out how things work is rather a question of trial & error, searching TestLink's and/or Bitnami's forums, or using a search engine to find a solution for a problem. It would be helpful to find better (OS-neutral) instructions somewhere and collect them for further use.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18591830134391785, "perplexity": 2249.406274467342}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00358.warc.gz"}
https://www.arxiv-vanity.com/papers/1310.4290/
# Extending Common Intervals Searching from Permutations to Sequences

Irena Rusu

L.I.N.A., UMR 6241, Université de Nantes, 2 rue de la Houssinière, BP 92208, 44322 Nantes, France

**Abstract.** Common intervals have been defined as a model of gene clusters in genomes represented either as permutations or as sequences. Whereas optimal algorithms for finding common intervals in permutations exist even for an arbitrary number of permutations, in sequences no optimal algorithm has been proposed yet, even for only two sequences. Surprisingly enough, when sequences are reduced to permutations, the existing algorithms perform far from the optimum, showing that their performances are not dependent, as they should be, on the structural complexity of the input sequences. In this paper, we propose to characterize the structure of a sequence by the number of different dominating orders composing it (called the domination number), and to use a recent algorithm for permutations in order to devise a new algorithm for two sequences. Its running time is in $O(n_1+n_2+q_1q_2p+occ)$, where $n_1, n_2$ are the sizes of the two sequences, $q_1, q_2$ are their respective domination numbers, $p$ is the alphabet size and $occ$ is the number of solutions to output. This algorithm performs better as $q_1$ and/or $q_2$ reduce, and when the two sequences are reduced to permutations (i.e. when $q_1=q_2=1$) it has the same running time as the best algorithms for permutations. It is also the first algorithm for sequences whose running time involves the parameter size of the solution. As a counterpart, when $q_1$ and $q_2$ are of $O(n_1)$ and $O(n_2)$ respectively, the algorithm is less efficient than other approaches.

## 1 Introduction

One of the main assumptions in comparative genomics is that a set of genes occurring in neighboring locations within several genomes represents functionally related genes [galperin2000s, lathe2000gene, tamames2001evolution]. Such clusters of genes are characterized by a highly conserved gene content, but a possibly different order of the genes within different genomes. Common intervals have been defined to model clusters [UnoYagura], and have been used since to detect clusters of functionally related genes [overbeek1999use, tamames1997conserved], to compute similarity measures between genomes [BergeronSim, AngibaudHow] and to predict protein functions [huynen2000predicting, von2003string]. Depending on the representation of genomes in such applications, allowing or not the presence of duplicated genes, comparative genomics requires finding common intervals either in sequences or in permutations over a given alphabet. Whereas the most general - and thus useful in practice - case is the one involving sequences, the easiest to solve is the one involving permutations. This is why, in some approaches [AngibaudApprox, angibaud2006pseudo], sequences are reduced to permutations by renumbering the copies of the same gene according to evolution-based hypotheses. Another way to exploit the performance of algorithms for permutations when dealing with sequences is to see each sequence as a combination of several permutations, and to deal with these permutations rather than with the sequences. This is the approach we use here.

In permutations on $n$ elements, finding common intervals may be done in $O(kn+occ)$ time, where $k$ is the number of permutations and $occ$ is the number of solutions, using several algorithms proposed in the literature [UnoYagura, BergeronK, heber2011common, IR2013]. In sequences (see Table 1), even when only two sequences $T$ and $S$ of respective sizes $n_1$ and $n_2$ are considered, the best solutions take quadratic time.
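To make the problem concrete before the formal definitions of Section 2, here is a small brute-force sketch in Python. It is my own illustration, not one of the algorithms discussed in the paper: it enumerates every window of each sequence, which is exactly the kind of quadratic-or-worse work the algorithms below aim to avoid.

```python
def intervals(seq):
    """All sets of elements that occur as a contiguous window of seq."""
    return {frozenset(seq[l:r])
            for l in range(len(seq))
            for r in range(l + 1, len(seq) + 1)}

def common_intervals(T, S):
    """Common intervals of T and S: element sets forming a window in both."""
    return intervals(T) & intervals(S)

# Toy sequences over the alphabet {1, 2, 3}, with duplicated elements
T = [1, 2, 3, 1, 2]
S = [2, 1, 2, 3]
print(sorted(map(sorted, common_intervals(T, S))))
# [[1], [1, 2], [1, 2, 3], [2], [2, 3], [3]]
```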
In chronological order, the first algorithm is due to Didier [didier2003common] and performs in $O(n_1n_2\log n_1)$ time and linear space. Shortly later, Schmidt and Stoye [schmidt2004quadratic] proposed an algorithm which needs $O(n_1n_2)$ time and space, and noted that Didier's algorithm may benefit from an existing result to achieve $O(n_1n_2)$ running time while keeping the linear space. Both of these algorithms use $T$ to define, starting with a given element of it, growing intervals of $T$ with fixed leftpoint and variable rightpoint, that are searched for in $S$. Alternative approaches attempt to avoid multiple searches of the same interval of $T$, due to multiple locations, by efficiently computing all intervals in $T$ and all intervals in $S$ before comparing them. The best running time reached by such an algorithm is obtained by merging the fingerprint trees proposed in [kolpakov2008new]; it depends on $L_1$ (respectively $L_2$), the number of maximal locations of the intervals in $T$ (respectively $S$), and on the size $p$ of the alphabet. The value $L_1$ (and similarly $L_2$) is in $\Omega(n_1)$ and does not exceed $O(n_1p)$.

The running times of all the existing algorithms have at least two main drawbacks: first, they do not involve at all the number of output solutions; second, they insufficiently exploit the particularities of the two sequences and, in the particular case where the sequences are reduced to permutations, need quadratic time instead of the optimal $O(n+occ)$ time for two permutations on $n$ elements. That means that their performance insufficiently depends both on the inherent complexity of the input sequences and on the amount of results to output. Unlike the algorithms dealing with permutations, the algorithms for sequences lack criteria allowing them to decide when the progressive generation of a candidate must be stopped, once it has become useless. This is the reason why their running time is independent of the number of output solutions. This is also the reason why, when sequences are reduced to permutations, the running time is very unsatisfactory.

The most recent optimal algorithm for permutations [IR2013] proposes a general framework for efficiently searching for common intervals and all of their known subclasses in permutations, and has a twofold advantage not offered by other algorithms. First, it permits an easy and efficient selection of the common intervals to output, based on two types of parameters. Second, assuming one permutation has been renumbered to be the identity permutation, it outputs all common intervals with the same minimum value together, in increasing order of their maximum value. We use here these properties to propose a new algorithm for finding common intervals in two sequences. Our algorithm strongly takes into account the structure of the input sequences, expressed by the number of different dominating orders (which are permutations) composing each sequence (equal to $1$ for permutations). Consequently, it has a complexity depending both on this structure and on the number of output solutions. It runs in optimal $O(n+occ)$ time for two permutations on $n$ elements, is better than the other algorithms for sequences composed of few dominating orders and, as a counterpart, performs less well as the number of composing dominating orders grows.

The structure of the paper is as follows. In Section 2 we define the main notions, including that of a dominating order, and give the results allowing us a first simplification of the problem. In Section 3 we propose our approach for finding common intervals in two sequences based on this simplification, and describe its general lines.
In Sections 4, 5 and 6 we develop each of these general lines and prove correctness and complexity results. Section 7 is the conclusion.

## 2 Preliminaries

Let $T$ be a sequence of length $n$ over an alphabet $\Sigma$. We denote the length of $T$ by $|T|$, the set of elements in $T$ by $\Sigma(T)$, the element of $T$ at position $i$, $1\le i\le n$, by $T[i]$, and the subsequence of $T$ delimited by positions $i$ and $j$ (included), with $i\le j$, by $T[i..j]$. An interval of $T$ is any set $I$ of integers from $\Sigma(T)$ such that there exist positions $l\le r$ with $I=\{T[i]\,:\,l\le i\le r\}$. Then $(l,r)$ is called a location of $I$ on $T$. A maximal location of $I$ on $T$ is any location $(l,r)$ such that neither $(l-1,r)$ nor $(l,r+1)$ is a location of $I$. When $T$ is the identity permutation $Id_n$, we denote $(l..r)=\{l,l+1,\ldots,r\}$, which is also the interval $T[l..r]$. Note that all intervals of $Id_n$ are of this form, and that each interval has a unique location on $Id_n$. When $T$ is an arbitrary permutation on $n$ elements (denoted $P$ in this case), we denote by $P^{-1}$ the function which associates with each element of $P$ its position in $P$. For a subsequence $T[i..j]$ of $T$, we also say that it is delimited by its elements $T[i]$ and $T[j]$, located at positions $i$ and $j$. These elements are the delimiters of $T[i..j]$ (note the difference between delimiters, which are elements, and their positions).

We now define common intervals of two sequences $T$ and $S$ of respective sizes $n_1$ and $n_2$:

###### Definition 1. [didier2003common, schmidt2004quadratic] A common interval of two sequences $T$ and $S$ over $\Sigma$ is a set of integers that is an interval of both $T$ and $S$. A $(T,S)$-maximal location of a common interval $I$ is any pair $((l_1,r_1),(l_2,r_2))$ of maximal locations of $I$ on $T$ (this is $(l_1,r_1)$) and respectively on $S$ (this is $(l_2,r_2)$).

###### Example 1.

The problem we are concerned with is defined below. We assume, without loss of generality, that both sequences contain all the elements of the alphabet, so that $\Sigma(T)=\Sigma(S)=\Sigma$.

**$(T,S)$-Common Intervals Searching.** Input: Two sequences $T$ and $S$ of respective lengths $n_1$ and $n_2$ over an alphabet $\Sigma=\{1,2,\ldots,p\}$. Find all $(T,S)$-maximal locations of common intervals of $T$ and $S$, without redundancy.

To address this problem, assume we add a new element (not in $\Sigma$) at positions $0$ and $n_1+1$ of $T$. Let $Succ$ be the $(n_1+1)$-size array defined for each position $i$ with $0\le i\le n_1$ by $Succ[i]=j$ if $T[j]=T[i]$, $j>i$, and $j$ is the smallest with this property (if such a $j$ does not exist, then $Succ[i]=n_1+1$). Call $T[i+1..Succ[i]-1]$ the area of the position $i$ on the sequence $T$.

###### Definition 2. [didier2003common] The order associated with a position $i$ of $T$, $0\le i\le n_1$, is the sequence of all elements in $T[i+1..Succ[i]-1]$ ordered according to their first occurrence in $T[i+1..Succ[i]-1]$. We note it $O_i$.

###### Remark 1. Note that:

• $O_i$ may be empty, and this holds iff $Succ[i]=i+1$.
• if $O_i$ is not empty, then its first element is $T[i+1]$.
• if $O_i$ is not empty, then $O_i$ contains each element in $T[i+1..Succ[i]-1]$ exactly once, and is thus a permutation on a subset of $\Sigma$.

In the following, we consider that a pre-treatment has been performed on $T$, removing every element which is equal to its successor, so as to guarantee that no empty order exists. In this way, the maximal locations are slightly modified, but this is not essential.

Let $b_1<b_2<\ldots$ respectively be the positions in $T$ of the elements defining $O_i$, i.e. the positions in $T$ of their first occurrences in the area of $i$. Now, define $B_i$ to be the ordered sequence of these positions.

###### Definition 3. Given a sequence $T$ and an interval $I$ of it, a maxmin location of $I$ on $T$ is any location $(l,r)$ of $I$ which is left maximal and right minimal, that is, such that neither $(l-1,r)$ nor $(l,r-1)$ is a location of $I$ on $T$. A $(T,S)$-maxmin location of $I$ is any pair of maxmin locations of $I$ on $T$ (this is $(l_1,r_1)$) and respectively on $S$ (this is $(l_2,r_2)$).

It is easy to see that maxmin locations and maximal locations are in bijection. We make this more precise as follows.

###### Claim 1. The function associating with each maximal location $(l,r)$ of an interval of $T$ the maxmin location $(l,r')$ in $T$ such that $r'$ is maximum with the properties $r'\le r$ and $(l,r')$ is a maxmin location of an interval of $T$, is a bijection.
Moreover, if $(l,r)\ne(l,r')$, then $(l,r)$ may be computed in $O(1)$ time when $(l,r')$ and the arrays $Succ$ and $B_{l-1}$ are known.

Proof. It is easy to see that by successively removing from $T[l..r]$ the rightmost element as long as it has a copy on its left, we obtain a unique interval $T[l..r']$ such that $(l,r')$ is a maxmin location, and $r'$ is maximum with this property. The inverse operation builds $(l,r)$ when $(l,r')$ is given. Moreover, if $(l,r)\ne(l,r')$, then $r>r'$. Then, assuming that $(l,r')$ is known, so that with $i=l-1$ the interval $T[l..r']$ is the prefix $O_i[1..j]$ of $O_i$, and that we want to compute $(l,r)$, we have two cases. If $j=|O_i|$, then $r$ is the position of the last element in the area of $i$, and thus is computed as $r=Succ[i]-1$. If $j<|O_i|$, then $r$ is the position in $T$ of the element preceding the first occurrence of $O_i[j+1]$, that is, $r=B_i[j+1]-1$.

In the following, and due to the preceding Claim, we solve the $(T,S)$-Common Intervals Searching problem by replacing maximal locations with maxmin locations. Using Claim 1, it is also easy to deduce that:

###### Claim 2. [didier2003common] The intervals of $T$ are the sets $O_i[1..j]$ with $0\le i\le n_1$ and $1\le j\le|O_i|$. As a consequence, the common intervals of $T$ and $S$ are the sets $O_i[1..j]$, with $0\le i\le n_1$ and $1\le j\le|O_i|$, which are also intervals of $S$.

With these precisions, Didier's approach [didier2003common] consists in considering each order $O_i$ and, in total time $O(n_1n_2\log n_1)$ (reducible to $O(n_1n_2)$ according to [schmidt2004quadratic]), verifying whether the intervals $O_i[1..j]$ with $1\le j\le|O_i|$ are also intervals of $S$. Our approach avoids considering each order separately by defining dominating orders, which contain other orders, with the aim of focusing the search for common intervals on each dominating order rather than spreading it over each of the orders it dominates. We introduce now the supplementary notions needed by our algorithm.

###### Definition 4. Let $i,j$ be two integers with $0\le i,j\le n_1$. We say that the order $O_i$ dominates the order $O_j$ if $O_j$ is a contiguous subsequence of $O_i$. We also say that $O_j$ is dominated by $O_i$. Equivalently, $B_j$ is a contiguous subsequence of $B_i$, so that the positions on $T$ of their common elements are the same.

###### Definition 5. Let $d$ be such that $0\le d\le n_1$. Order $O_d$ is dominating if it is not dominated by any other order of $T$. The number of dominating orders of $T$ is the domination number of $T$. The set of orders of $T$ is provided with an order $\preceq$, defined as $O_i\preceq O_j$ iff $i\le j$. For each dominating order $O_d$ of $T$, its strictly dominated orders are the orders $O_i$, $0\le i\le n_1$, such that $O_i$ is dominated by $O_d$ but is not dominated by any order preceding $O_d$ according to $\preceq$.

###### Example 4.

For each dominating order (which is a permutation), we need to record the suborders which correspond to the strictly dominated orders. Only the left and right endpoints of each suborder are recorded, in order to limit the space and time requirements. Then, let the domination function $F_d$ of a dominating order $O_d$ be the partial function defined as follows:

$$F_d(s):=f \quad \textrm{if there is some } i \textrm{ such that } O_i \textrm{ is strictly dominated by } O_d \textrm{ and } B_d[s..f]=B_i.$$

For the other values of $s$, $F_d(s)$ is not defined. Note that $F_d(1)=|O_d|$, since by definition any dominating order strictly dominates itself. See Figure 2.

###### Example 5.

We know that, according to Claim 2, the common intervals of $T$ and $S$ must be searched for among the intervals $O_i[1..j]$ or, if we focus on one dominating order $O_d$ and its strictly dominated orders identified by $F_d$, among the intervals $O_d[s..f']$ for which $F_d(s)$ is defined and $f'\le F_d(s)$. We formalize this search as follows.

###### Definition 6. Let $P$ be a permutation on $p$ elements, and let $F$ be a partial function such that $F(1)=p$ and $w\le F(w)$ for all values $w$ for which $F$ is defined. A location $(s,f)$ of an interval of $P$ is valid with respect to $F$ if $F$ is defined for $s$ and $f\le F(s)$.

###### Claim 3. The $(T,S)$-maxmin locations of common intervals of $T$ and $S$ are in bijection with the triples $(O_d,(s,f),(l_2,r_2))$ such that:

• $O_d$ is a dominating order of $T$,
• the location $(s,f)$ on $O_d$ of the interval $O_d[s..f]$ is valid with respect to $F_d$,
• $(l_2,r_2)$ is a maxmin location of $O_d[s..f]$ on $S$.
Moreover, the triple associated with a given $(T,S)$-maxmin location satisfies: $O_d$ is the dominating order that strictly dominates the order defining the location on $T$.

Proof. See Figure 2. By Claim 2, the common intervals of $T$ and $S$ are the sets $\mathcal{A}(O_i[1..k])$, with $0\le i\le n_1$ and $1\le k\le|O_i|$, which are intervals of $S$. We note that these sets are not necessarily distinct, but their locations on $T$, given by $B_i$, are distinct. Then, the $(T,S)$-maxmin locations of common intervals are in bijection with the pairs formed by such an interval (identified by its location on $T$) and a maxmin location of it on $S$, which are themselves in bijection with the pairs formed by the dominating order $O_d$ strictly dominating the corresponding order and a location valid with respect to $F_d$.

###### Corollary 1. Each $(T,S)$-maxmin location of a common interval of $T$ and $S$ is computable in $O(1)$ time if the corresponding triple and the sequence $B_d$ are known.

Looking for the $(T,S)$-maxmin locations of the common intervals of $T$ and $S$ thus reduces to finding, for each dominating order $O_d$ of $T$ and for $S$, the maxmin locations of common intervals whose locations on $O_d$ are valid with respect to the domination function of $O_d$. The central problem to solve now is thus the following one (replace $P$ by $O_d$, $F$ by $F_d$ and $p$ by $|O_d|$):

Guided Common Intervals Searching
Input: A permutation $P$ on $p$ elements, a sequence $S$ of length $n_2$ on the same set of $p$ elements, a partial function $F:\{1,2,\ldots,p\}\to\{1,2,\ldots,p\}$ such that $F(1)=p$ and $w\le F(w)$ for all $w$ such that $F(w)$ is defined.
Find all $(P,S)$-maxmin locations of common intervals of $P$ and $S$ whose locations on $P$ are valid with respect to $F$, without redundancy.

As before, we assume w.l.o.g. that $S$ contains all the elements in $\{1,2,\ldots,p\}$, so that $\mathcal{A}(P)=\mathcal{A}(S)$.

In this paper, we show (see Section 3, Theorem 1) that Guided Common Intervals Searching may be solved within time and space bounds that depend on the number of its solutions for $P$ and $S$. This running time gives the running time of our general algorithm. However, an improved running time for solving Guided Common Intervals Searching would lead to a faster algorithm for the case of two sequences, improving the complexity of the existing algorithms.

## 3 The approach

The main steps for finding the maxmin locations of all common intervals in two sequences using the reduction to Guided Common Intervals Searching are given in Algorithm 1. Recall that for $T$ and $S$ we denote by $n_1$ and $n_2$ their respective sizes, and by $d_1$ and $d_2$ their domination numbers. The algorithms for computing each step are provided in the next sections.

To make things clear, we note that the dominating orders (steps 1 and 2) are computed but never stored simultaneously, whereas dominated orders are only recorded as parts of their corresponding dominating orders, using the domination functions. The initial algorithm for computing this information, in step 1 (and similarly in step 2), is too time-consuming to be reused in steps 3 and 4 when dominating orders are needed. Instead, minimal information from steps 1 and 2 is stored, which allows us to recover the dominating orders in steps 3 and 4 with a more efficient algorithm. In this way, we keep the space requirements linear, and we perform steps 3, 4 and 5 in a global time which is linear in the size of the input plus the size of the output, which is the best we may hope for.

In order to solve Guided Common Intervals Searching, our algorithm cuts $S$ into dominating orders and then looks for common intervals in permutations. This is done in steps 2, 4 and 5, as proved in the next theorem.

###### Theorem 1. Steps 2, 4 and 5 in Algorithm 1 solve Guided Common Intervals Searching with input $P$, $S$ and $F$. Moreover, these steps may be performed within the global time and space bounds established in the next sections.

Proof.
Claim 3 and Corollary 1 ensure that the maxmin locations of common intervals of $P$ and $S$, in this precise order, are in bijection with (and may be easily computed from) the triples such that the first component is a dominating order $O_d$ of $S$, the second is a location valid with respect to $F_d$, and the third is a maxmin location of the corresponding interval on $P$. Note that since $P$ is a permutation, each location on $P$ is a maxmin location. Reducing these triples to those for which the location on $P$ is valid with respect to $F$, as indicated in step 5, we obtain the solutions of Guided Common Intervals Searching with input $P$, $S$ and $F$.

In order to give estimations of the running time and memory space, we refer to results proved in the remainder of this paper. Step 2 takes the time and space given in Theorem 3 (Section 4), assuming the orders are not stored; step 4 needs the time and space given in Theorem 4 (Section 5) to successively generate the orders from information provided by step 2; whereas step 5 runs within the bounds given in Theorem 6 (Section 6), which depend on the number of solutions of Guided Common Intervals Searching.

###### Theorem 2. Algorithm 1 solves the Common Intervals Searching problem within the global time and space bounds resulting from the analysis below, where $K$ denotes the size of the solution.

Proof. The correctness of the algorithm is ensured by Claim 3 and Theorem 1. We now discuss the running time and memory space, once again referring to results proved in the remaining sections.

As proved in Theorem 3 (Section 4), Step 1 (and similarly Step 2) takes the time and space stated there, assuming that the dominating orders are identified by their position on $T$ and are not stored (each of them is computed, used to find its domination function and then discarded). The positions corresponding to dominating orders are stored in decreasing order in a stack. The values of the domination functions are stored as lists, one for each dominating order $O_d$, whose elements are the pairs $(s,F_d(s))$, in decreasing order of the value $s$. Since each order is strictly dominated by exactly one dominating order, this representation needs a global memory space of $O(n_1)$.

In step 3 the progressive computation of the dominating orders is done using the sequence $T$ and the list of positions of the dominating orders. The algorithm achieving this is presented in Section 5, Theorem 4. For each dominating order of $T$, the corresponding orders of $S$ are successively computed by the same algorithm within the same global bounds, and are only temporarily stored. Step 5 is performed within the bounds given in Section 6 (Theorem 6), which depend on the number of output solutions of Guided Common Intervals Searching. The above-mentioned running time of our algorithm easily follows.

To simplify the notation, in the next sections the size of the sequence under consideration is denoted by $n$ and its domination number by $d$. The vector Succ, as well as the similarly defined vectors (such as Prec) introduced later, are assumed to be computed once at the beginning of Algorithm 1.

## 4 Finding the dominating and dominated orders of T

This task is subdivided into two parts. First, the dominating orders are found, as well as, for each of them, the set of positions $i$ such that the dominating order $O_d$ strictly dominates $O_i$. Thus $B_i=B_d[s..f]$, where $s$ is known but $f$ is not known yet. In the second part of this section, we compute $f$. Note that in this way we never store any dominated order, but only its position on $T$ and on the dominating order strictly dominating it. This is sufficient to retrieve it when needed.

### 4.1 Find the positions $i$ such that $O_i$ is dominating/dominated

As before, let $T$ be the first sequence, with an additional element (a new character, not in $\Sigma$) at positions $0$ and $n_1+1$.
Recall that we assumed that neighboring elements in $T$ are not equal, and that we defined Succ to be the array such that, for all $i$ with $0\le i\le n_1$, $\mathrm{Succ}[i]=j$ if $T[j]=T[i]$ and $j>i$ is the smallest position with this property (if $j$ does not exist, then $\mathrm{Succ}[i]=n_1+1$).

Given a subsequence $W$ of $T$, slicing it into singletons means adding a separator character at the beginning and the end of $W$, as well as a so-called $a$-separator after each element of $W$ which is the letter $a$, and this for each $a\in\Sigma$. Call the resulting sequence the sliced sequence of $W$.

Once the sliced sequence is obtained from $T$, successive removals of the separators are performed, and the resulting sequence keeps the same name. Let a slice be any maximal interval of positions $[u,v]$ such that no separator occurs between the elements at positions $u$ and $v$. Note that in this case a separator exists just before position $u$ and another just after position $v$, because of the maximality of the interval $[u,v]$. Immediately after a sequence has been sliced into singletons, every position forms a slice.

Slices are disjoint sets which evolve from singletons to larger and larger disjoint intervals using separator removals. Two operations are needed, defining, as the reader will easily note, a Union-Find structure (a concrete sketch follows the list):

• Remove an $a$-separator, thus merging two neighboring slices into a new slice. This is set union, between sets representing neighboring intervals.
• Find the slice a position belongs to. In the algorithm we propose, this is the Find operation.
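Here is a minimal sketch of the slice structure as a disjoint-set forest; the structure and names are ours, not Algorithm 2's, and unions are deliberately restricted to neighboring slices. Since unions only ever merge adjacent intervals, each slice can equivalently be represented by the interval of positions it covers.

```cpp
#include <numeric>
#include <vector>

// Illustrative sketch (our naming): slices as a disjoint-set forest over
// positions 0..n-1. Initially every position is its own slice; removing
// the separator sitting between positions v and v+1 merges the two
// neighboring slices (Union), and find() returns the representative of
// the slice a position belongs to.
struct Slices {
    std::vector<int> parent;

    explicit Slices(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);   // singletons
    }

    int find(int x) {                                  // Find with path halving
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }

    void remove_separator_after(int v) {               // Union of v's and (v+1)'s slices
        parent[find(v)] = find(v + 1);
    }

    bool same_slice(int u, int v) { return find(u) == find(v); }
};
```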
In the following, a position $i$ is resolved if its order $O_i$ has already been identified, either as a dominating or as a dominated order. Now, by calling Resolve($i$) in Algorithm 2 successively for all positions $i$ (initially non-resolved), we find the dominating orders of $T$ and, for each of them, the positions $i$ such that $O_i$ is strictly dominated by it. Note that the rightmost position of each $B_i$ dominated by $B_d$ is computed by the procedure RightEnd(), given in Section 4.2.

To prove the correctness of our algorithm, we first need two results.

###### Claim 4. Order $O_i$ with $d\le i$ is dominated by order $O_d$ iff $i$ belongs to the area of $d$ and the elements of $O_i$ have the same first occurrences after position $d$ as after position $i$, that is, iff $B_i$ is a contiguous subsequence of $B_d$.

Proof. Notice that, by definition, the positions in $B_i$ belong to the area of $i$.

"$\Rightarrow$": The properties are deduced directly from the definitions of an order and of order domination. If $i$ did not belong to the area of $d$, then some element would belong to $O_d$ but not to $O_i$ (again by the definition of an order), a contradiction. Moreover, if, by contradiction, some element of $O_i$ occurred both in $T[d+1..i]$ and in the area of $i$ (choose each occurrence as small as possible), then its positions recorded in $B_d$ and in $B_i$ would differ, since only the first occurrence of an element is recorded, and thus $O_i$ would not be dominated by $O_d$, a contradiction.

"$\Leftarrow$": Let $w$ be an element of $O_i$. Then the first occurrence of $w$ in the area of $i$ is, by definition, at a position of $B_i$. Moreover, by hypothesis, the first occurrence of $w$ in the area of $d$ is at the same position. Thus $B_i\subseteq B_d$. It remains to show that $B_i$ is contiguous inside $B_d$. This is easy, since any position in $B_d$, not in $B_i$ but located between two positions of $B_i$, would imply the existence of an element whose first occurrence in the area of $d$ belongs to the area of $i$; this element would then belong to $O_i$, and its position to $B_i$, a contradiction.

###### Claim 5. Let $d\le i$, and assume $O_d$ is dominating. Then $i$ is labeled as "dominated by $d$" in Resolve($d$) iff $O_i$ is strictly dominated by $O_d$.

Proof. Note that $d$ may get a label during Resolve($d$) iff $d$ is not resolved at the beginning of the procedure, in which case steps 2-3 of Resolve($d$) ensure that $d$ is labeled as "dominating". By hypothesis, we assume this label is correct.

Now, $i$ is labeled as "dominated by $d$" iff $i$ belongs to the area of $d$ (step 5) and, in step 7, $i$ is not already resolved and $d$ and $i$ are in the same slice in the sequence where the $a$-separators corresponding to the letters $a$ encountered so far have been removed (step 6). The latter of the two conditions is equivalent to saying that $T[d+1..i]$ contains only characters whose separators have been removed, that is, only characters whose first occurrence in the area of $d$ belongs to $T[d+1..i]$. This is equivalent to requiring that no character in $T[d+1..i]$ appears before $d+1$, and that all characters in $T[d+1..i]$ have a first occurrence not later than $i$. But then the conditions on the right-hand side of Claim 4 are fulfilled, and this means $O_i$ is dominated by $O_d$. Given that step 8 is executed only once for a given position $i$, that is, when $i$ is labeled as resolved, the domination is strict.

Now, the correctness of our algorithm is given by the following claim.

###### Claim 6. Assume temporarily that the procedure RightEnd() is empty. Then calling Resolve($i$) successively for $i=0,1,\ldots,n_1$ correctly identifies the dominating orders and, for each dominating order $O_d$, the positions $i$ such that $O_d$ strictly dominates $O_i$. This algorithm takes time and space linear in $n_1$, up to the overhead of the Union-Find operations.

Proof. We prove by induction on $k$ that, at the end of the execution of Resolve($k$), we have for all $d,i$ with $d\le i\le k$: (a) $d$ is labeled as "dominating" iff $O_d$ is dominating; (b) $i$ is labeled as "dominated by $d$" iff $O_d$ is dominating and $O_i$ is strictly dominated by $O_d$.

Say that a position $i$ is used if $i$ is unresolved when Resolve($i$) is called. We consider two cases.

Case $k=0$. The position $0$ is necessarily used (no position is resolved yet), thus $0$ is labeled as "dominating" (step 3) and no other order will get this label during the execution of Resolve($0$). Now, $O_0$ is really dominating, as there is no order preceding it, and property (a) is proved. To prove (b), recalling that $O_0$ is dominating, we apply Claim 5. Note that $i\neq 0$, since in step 7 position $0$ is already resolved.

Case $k>0$. Assume by induction that the affirmation we want to prove is true before the call of Resolve($k$). If $k$ is not used, that means $k$ is already resolved when Resolve($k$) is called, and nothing is done. Properties (a) and (b) are already satisfied, due to the position $d<k$ such that $O_d$ dominates $O_k$. Assume now that $k$ is used. Then $k$ is labeled "dominating" and we have to show that $O_k$ is really dominating. If this was not the case, then $O_k$ would be strictly dominated by some $O_d$ with $d<k$, and by the inductive hypothesis it would have been labeled as such (property (b) for $d$). But this contradicts the assumption that $k$ is unresolved at the beginning of Resolve($k$). We deduce that (a) holds. To prove property (b), notice that it is necessarily true for $d<k$ and the corresponding dominated orders, by the inductive hypothesis and since Resolve($k$) does not relabel any labeled order. To finish the proof of
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170153737068176, "perplexity": 595.7443285537173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585441.99/warc/CC-MAIN-20211021195527-20211021225527-00324.warc.gz"}
https://www.physicsoflearning.com/edblog/challenge-questions-with-sbg/
# Challenge Questions with SBG

One thing that I've always struggled with is adding challenging questions to my assessments within an SBG scheme. Like a lot of people using SBG, I use a 4-point scale. The upper limit on this scale is similar to an A, and for the sake of the post I'll refer to the top proficiency as "mastery". If a student were to get an A in a course I teach, roughly speaking they would have to be at the mastery level in at least half of the learning objectives, and then only if they don't have any level 2 grades.

The problem with asking a good and interesting challenge question is that only one or two students would likely solve these questions correctly. This means one of two things: either only one or two students would ever get an A in a course I teach, or I would sort of "ignore" the challenge question in terms of determining mastery. If it's the former, there would be a valid argument that the hardest challenge questions shouldn't be used to determine mastery; they are a better indicator for something like "sophisticated excellence." (You can choose your own descriptor here.) If it's the latter, the typical somewhat cynical teen would see through the assessment and naturally ask what the point of the question is, if it doesn't count towards the grade.

For what it's worth, here are a couple of examples of challenging questions. Question 1 would be from a Physics 11 course (intro physics) and Question 2 could be Pre-Calculus 11 (algebra 1? algebra 2?).

1. Two people 120 m apart run towards each other. One person runs at a constant velocity of 5.5 m/s while the other person accelerates at a rate of 0.11 m/s/s. Where do they meet?

2. $$\left(\frac{x^2+2}{x}\right)^2-6\left(\frac{x^2+2}{x}\right)+5=0$$

Neither is fantastically difficult, but I would estimate that maybe 2 out of 30 kids in a typical class of mine may get these. If we practiced questions just like these, the number of students that get them correct would increase, but then it takes away from the idea of asking new, interesting challenge questions.

How can we go about stretching/challenging all of our students but still use a 4-point grading system?
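For readers who want to check themselves, here are worked answers; for Question 1 I'm assuming the accelerating runner starts from rest, which the problem doesn't actually state. The runners' distances must sum to 120 m:

$$5.5t + \tfrac{1}{2}(0.11)t^{2} = 120 \;\Longrightarrow\; 0.055t^{2} + 5.5t - 120 = 0 \;\Longrightarrow\; t \approx 18.4\ \mathrm{s},$$

so they meet roughly $5.5 \times 18.4 \approx 101$ m from the constant-velocity runner's starting point. For Question 2, substituting $u = \frac{x^{2}+2}{x}$ gives

$$u^{2} - 6u + 5 = 0 \;\Longrightarrow\; (u-1)(u-5) = 0,$$

and $u = 1$ leads to $x^{2} - x + 2 = 0$ (no real roots), while $u = 5$ leads to $x^{2} - 5x + 2 = 0$, i.e. $x = \frac{5 \pm \sqrt{17}}{2}$.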
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6408811211585999, "perplexity": 503.7262532343262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00093.warc.gz"}
http://mathhelpforum.com/calculus/64827-derivatives-graphing.html
1. ## Derivatives and Graphing

My computer-illiterate teacher assigned these primitive and horrible online quizzes... it seems that whatever combination I select, I can never get the right answers. The site does a horrible job of explaining the concept, never explains why a problem is wrong, and only checks the first wrong question, making it impossible to check the others. Can anyone help?

Derivatives and Graphing
Derivatives and Graphing
Derivatives and Graphing

2. Well, a point of inflection is where the concavity changes, so for that first link you posted, the points of inflection would be found at A, C, E, and G. Local maxima are the maximum points, which are B and F. f'(x) is increasing at point E. You would find this by graphing the derivative of the function and seeing where the slope increases. I couldn't get the right answers for the question about where f(x) decreases, but the rest after that are pretty easy. Hopefully this helped; ask if you need more help!

3. Originally Posted by Ineedhelpplz
My computer-illiterate teacher assigned these primitive and horrible online quizzes... it seems that whatever combination I select, I can never get the right answers. The site does a horrible job of explaining the concept, never explains why a problem is wrong, and only checks the first wrong question, making it impossible to check the others. Can anyone help?
Derivatives and Graphing
Derivatives and Graphing
Derivatives and Graphing
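A generic worked illustration of the tests mentioned above (not tied to the quiz's particular graph, which we can't see here): take f(x) = x^3. Then f''(x) = 6x changes sign at x = 0, so (0, 0) is a point of inflection; and f'(x) = 3x^2 is increasing exactly where f''(x) > 0, that is, for x > 0.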
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8245841860771179, "perplexity": 979.5910339688581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648178.42/warc/CC-MAIN-20180323044127-20180323064127-00198.warc.gz"}
https://embdev.net/topic/129492
# Forum: ARM programming with GCC/GNU tools: Setting up minimal C++ run-time environment

I have recently installed WinARM 20070505. Having compiled some test code and simulated it with Insight, I am certain that things are working correctly. Given the following:
- I do not need the RTTI capability.
- I do not need dynamic memory (malloc, free, new, delete).
- I do not need the stream I/O functionality (fopen, iostream, etc.).
- I will need exception handling.
- I will need to code in ARM 32 bit mode (no thumb code).
My question is: "How can I configure the build/link so that the FLASH/RAM space used is minimal?" Currently, even the smallest test program generates a 40k+ image. With exception handling, it jumps 2k+. So it seems to me that a lot of code is being linked in that I do not need, and there must be a way to strip it out. Any suggestions/tips/examples would be greatly appreciated.
Cheers!
Daniel Quinz, P. Eng. Acacetus Inc.

This is what I use:
Compiler:
-ffunction-sections
-fdata-sections
-fno-builtin
-fno-rtti
-fno-exceptions
-fno-unwind-tables
-nostartfiles
Be careful when using things from newlib (e.g. printf). You need a linker script which takes care of C++ peculiarities, e.g. you need entries *(.text.*) *(.data.*) *(.bss.*) in the respective sections, a .ctors section for *(.ctors.*) and *(.ctors) and a dummy .dtors section for *(.dtors). Also the startup initialization should call the C++ constructors.

Andreas Kaiser wrote:
> This is what I use:
>
> Compiler:
> -ffunction-sections
> -fdata-sections
> -fno-builtin
> -fno-rtti
> -fno-exceptions
> -fno-unwind-tables

Thanks for the tip. I added all of these except "-fno-exceptions" and "-fno-unwind-tables" as I need exception handling. There was no difference in the FLASH memory, but the RAM usage went down only 8 bytes.

I have found that if I don't define the variable 'device_table_entry' then I get an unresolved reference error to it during link. The functions that reference it are:

_close_r , _read_r , _write_r , _ioctl_r

All of these are in the 'newlib_lpc' library. Given that I make no explicit reference anywhere in my code to any of these functions, nor do I use 'printf', etc., I do not see why these are being linked into the application. I have checked the 'crt0.s' file that I use and there are no references to these functions in there either. There must be a way to avoid linking in these functions if they are not needed. Any thoughts?

Dan

> -nostartfiles
>
> Be careful when using things from newlib (e.g. printf).

I'm not using either (see above text on newlib linking).

> You need a linker script which takes care of C++ peculiarities, e.g. you
> need entries *(.text.*) *(.data.*) *(.bss.*) in the respective sections,
> a .ctors section for *(.ctors.*) and *(.ctors) and a dummy .dtors
> section for *(.dtors).

Done. Works fine.

> Also the startup initialization should call the C++ constructors.

Done. Works fine.

Dan Quinz wrote:
> Andreas Kaiser wrote:
>> This is what I use:
>>
>> Compiler:
>> -ffunction-sections
>> -fdata-sections
>> -fno-builtin
>> -fno-rtti
>> -fno-exceptions
>> -fno-unwind-tables
>
> Thanks for the tip. I added all of these except "-fno-exceptions" and
> "-fno-unwind-tables" as I need exception handling. There was no
> difference in the FLASH memory, but the RAM usage went down only 8
> bytes.

Make sure to add the linker-option --gc-sections when using -ffunction-sections and -fdata-sections.
If you use the compiler frontend (arm-elf-gcc) for linking - recommended - add this option: -Wl,--gc-sections. If you already have some options passed to the linker (for example already a -Wl entry in CFLAGS) just add --gc-sections (I usually use: CFLAGS += -Wl,-Map=$(TARGET).map,--cref,--gc-sections).

> I have found that if I don't define the variable 'device_table_entry'
> then I get an unresolved reference error to it during link. The
> functions that reference it are:
>
> _close_r , _read_r , _write_r , _ioctl_r
>
> All of these are in the 'newlib_lpc' library.

The newlib-lpc provides the syscalls for the newlib and so indirectly for the libstdc++.a. newlib-lpc needs the device_table. The user has to provide it. I'm not sure why you get an error about "device_table_entry" since this is basically just a struct defined in dev_ctrl.h.

> Given that I make no explicit reference anywhere in my code to any of
> these functions, nor do I use 'printf', etc., I do not see why these are
> being linked into the application.

As far as I know the C++ stdlib depends on stdio-functions but I'm not sure if this can be disabled somehow. A map-file should provide more information about which functions of the library depend on others. The main reason for the large binary size might be that the C++ stdlib "pulls in" the complete stdio-support including floating-point (IIRC this is around 30 kBytes for "plain" newlib without C++).

> I have checked the 'crt0.s' file that I use and there are no references
> to these functions in there either.

Maybe the dependencies are "hidden" and come from another library like libstdc++.a.

> There must be a way to avoid linking in these functions if they are not
> needed.

Maybe there is an option like the ones for RTTI but I have not found it yet. Maybe the toolchain (esp. the C++ stdlib) has to be rebuilt with special options.

Sorry, a lot of "maybes" and no real solution but hopefully some useful hints.

Martin Thomas

Check Section "Reducing the Overhead of C++" from http://www.embedded.com/columns/technicalinsights/201001729

> Given that I make no explicit reference anywhere in my code to any of
> these functions, nor do I use 'printf', etc., I do not see why these are
> being linked into the application.

Temporarily remove/rename the newlib. The linker should tell you.

Thanks Martin, I added the options and recompiled. The resulting code reduction was about 4500 bytes. Little to no effect on the RAM usage (10 bytes), but I am happier. I will continue to research a bit more into what the newlib_lpc library is doing, as it seems this is the culprit for the external references.
Cheers!
Dan

Martin Thomas wrote:
> Dan Quinz wrote:
>> Andreas Kaiser wrote:
>>> This is what I use:
>>>
>>> Compiler:
>>> -ffunction-sections
>>> -fdata-sections
>>> -fno-builtin
>>> -fno-rtti
>>> -fno-exceptions
>>> -fno-unwind-tables
>>
>> Thanks for the tip. I added all of these except "-fno-exceptions" and
>> "-fno-unwind-tables" as I need exception handling. There was no
>> difference in the FLASH memory, but the RAM usage went down only 8
>> bytes.
>
> Make sure to add the linker-option --gc-sections when using
> -ffunction-sections and -fdata-sections. If you use the compiler
> frontend (arm-elf-gcc) for linking - recommended - add this option:
> -Wl,--gc-sections. If you already have some options passed to the linker
> (for example already a -Wl entry in CFLAGS) just add --gc-sections (I
> usually use: CFLAGS += -Wl,-Map=$(TARGET).map,--cref,--gc-sections)
>
>> I have found that if I don't define the variable 'device_table_entry'
>> then I get an unresolved reference error to it during link.
>> The functions that reference it are:
>>
>> _close_r , _read_r , _write_r , _ioctl_r
>>
>> All of these are in the 'newlib_lpc' library.
>
> The newlib-lpc provides the syscalls for the newlib and so indirectly
> for the libstdc++.a. newlib-lpc needs the device_table. The user has to
> provide it. I'm not sure why you get an error about "device_table_entry"
> since this is basically just a struct defined in dev_ctrl.h.
>
>> Given that I make no explicit reference anywhere in my code to any of
>> these functions, nor do I use 'printf', etc., I do not see why these are
>> being linked into the application.
>
> As far as I know the C++ stdlib depends on stdio-functions but I'm not
> sure if this can be disabled somehow. A map-file should provide more
> information about which functions of the library depend on others. The
> main reason for the large binary size might be that the C++ stdlib
> "pulls in" the complete stdio-support including floating-point (IIRC
> this is around 30 kBytes for "plain" newlib without C++).
>
>> I have checked the 'crt0.s' file that I use and there are no references
>> to these functions in there either.
>
> Maybe the dependencies are "hidden" and come from another library like
> libstdc++.a.
>
>> There must be a way to avoid linking in these functions if they are not
>> needed.
>
> Maybe there is an option like the ones for RTTI but I have not found
> it yet. Maybe the toolchain (esp. the C++ stdlib) has to be rebuilt with
> special options.
>
> Sorry, a lot of "maybes" and no real solution but hopefully some useful
> hints.
>
> Martin Thomas

Thanks again Martin. I overrode malloc/free/new/delete and the atexit func. The net effect is a code reduction of about 100 bytes, so I'm still nowhere near removing the "bottom of the iceberg". As stated before, I will continue to dig into the libraries to try to get insight as to what exactly is being linked in and where.
Cheers!
Dan

Martin Thomas wrote:
> Check Section "Reducing the Overhead of C++" from
> http://www.embedded.com/columns/technicalinsights/201001729

Thanks to all for your feedback. I appreciate it!
Cheers!
Dan
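For readers who hit the same unresolved-reference chain: one way to break it, sketched below as an illustration rather than taken from any post in this thread, is to supply your own minimal reentrant syscall stubs so the linker never pulls _close_r/_read_r/_write_r out of newlib-lpc (and, with them, the device_table). The stub bodies and the choice to fail with EBADF are our assumptions; _ioctl_r is a newlib-lpc extension, so it is omitted here.

```cpp
#include <errno.h>
#include <reent.h>      // struct _reent and the reentrant syscall declarations
#include <stddef.h>     // size_t
#include <sys/types.h>  // _ssize_t

// Minimal syscall stubs (illustrative sketch). If these are linked from
// your own objects, the newlib-lpc versions are never needed, so the
// reference to device_table should disappear. Each stub just fails cleanly.
extern "C" int _close_r(struct _reent *r, int /*fd*/) {
    r->_errno = EBADF;
    return -1;
}

extern "C" _ssize_t _read_r(struct _reent *r, int /*fd*/, void * /*buf*/, size_t /*n*/) {
    r->_errno = EBADF;
    return -1;
}

extern "C" _ssize_t _write_r(struct _reent *r, int /*fd*/, const void * /*buf*/, size_t /*n*/) {
    r->_errno = EBADF;
    return -1;
}
```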
Dan Quinz wrote:
> I overrode malloc/free/new/delete and the atexit func. The net effect
> is a code reduction of about 100 bytes, so I'm still nowhere near
> removing the "bottom of the iceberg".

The article helped me reduce my code a little bit as well, but I'd also like to get some more out of it. Is there any way to stop it from pulling in an entire library, or at least make it intelligently use my function and not include it from the library? For example, from my map file:

0x534 c:/winarm/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib/thumb/interwork\libc.a(lib_a-mallocr.o)
0x000178a0 _malloc_r

If I attempt to compile the following to override the library function:

extern "C" void *_malloc_r(struct _reent *, size_t) {return (void *)0; }

I get this error:

1>c:/winarm/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib/thumb/interwork\libc.a(lib_a-mallocr.o): In function `_malloc_r':
1>mallocr.c:(.text+0x0): multiple definition of `_malloc_r'
1>.out/Reduced_Newlib.o:../../../../src/HwDrivers/ARM/Reduced_Newlib.cpp:31: first defined here
1>c:/winarm/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/bin/ld.exe: Warning: size of symbol `_malloc_r' changed from 4 in .out/Reduced_Newlib.o to 1332 in c:/winarm/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib/thumb/interwork\libc.a(lib_a-mallocr.o)

I am not using any sort of malloc, or free or anything of that sort in my code, and it would be great if it stopped including these libraries...

Jim Kaz wrote:
> If I attempt to compile the following to override the library function:
> extern "C" void *_malloc_r(struct _reent *, size_t) {return (void *)0; }

Well, I'm not entirely sure what I changed, but while overriding other functions, I decided to try _malloc_r again to see if I could get it working at all. The following works. Looks similar to what I posted before, huh? Yea, I dunno? shrug

extern "C" void *_malloc_r(struct _reent *, size_t) {return (void *)0; }

Dan Quinz wrote:
> I overrode malloc/free/new/delete and the atexit func. The net effect
> is a code reduction of about 100 bytes, so I'm still nowhere near
> removing the "bottom of the iceberg".

After playing around with my code some more, here are some more things that you can override:

extern "C" void *_malloc_r(struct _reent *, size_t) {return (void *)0; }
extern "C" void _free_r(struct _reent *, void*) {}
extern "C" void *_realloc_r(struct _reent *, void*, size_t) {return (void *)0; };
extern "C" void *realloc(void *, size_t) {return (void *)0; };
extern "C" void *_calloc_r(struct _reent *, size_t, size_t) {return (void *)0; };

These functions should entirely remove malloc.o from your map file and will drop your flash usage by another couple of kilobytes. Additionally, if you don't use any string manipulation:

extern "C" int _vfprintf_r(struct _reent *, FILE*, const char*, void*) {return 0;}

That frees up about 14K of flash, maybe 100 bytes of RAM.

I'm looking for a way to remove alloc.o, which means I need to override __cxa_free_exception and __cxa_allocate_exception, but I'm having difficulty in finding their declarations. This will free up a few hundred bytes of flash and around a K or 2 of RAM.

Thanks for posting that article Martin! It's been a big help. I never knew how to override the functions that got linked in, and this showed me exactly how to do it. Thanks!

Something I came across. The article does __aeabi_atexit, whereas I think we want to use __cxa_atexit, since we are using GCC. In addition, we need to use the -fuse-cxa-atexit flag in the compiler.
I have yet to actually get this to work though, since I keep getting the following error:

1>.out/HwMemSystem.o: In function `__static_initialization_and_destruction_0':
1>../../../../src/MemSystem/HwMemSystem.cpp:145: undefined reference to `__dso_handle'

I also disabled the 'eh_' library includes, which would include eh_alloc, thus freeing up about 2K of RAM, but it doesn't seem to do that. The output file is exactly the same size, sigh.

Hopefully some of my digging will be useful to you Dan.

Thanks Jim! Your redefinitions for the memory allocation routines helped quite a bit! I dropped the flash down by 4228 bytes and the ram by 1040. There was no effect when I redefined the vprintf function though.

I also experimented with removing the global 'device_table_entry':

#if 0
const struct device_table_entry * device_table[] = {
  & com1 ,  // stdin
  & com1 ,  // stdout
  & com1 ,  // stderr
  // 0
} ;
#endif

and got:

c:/winarm.20070505/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib\libnewlib-lpc.a(_close_r.o): In function `_close_r':
C:\WinARM\utils\newlib_lpc_rel5a_src/_close_r.c:71: undefined reference to `device_table'
c:/winarm.20070505/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib\libnewlib-lpc.a(_read_r.o): In function `_read_r':
undefined reference to `device_table'
c:/winarm.20070505/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib\libnewlib-lpc.a(_write_r.o): In function `_write_r':
C:\WinARM\utils\newlib_lpc_rel5a_src/_write_r.c:103: undefined reference to `device_table'
c:/winarm.20070505/bin/../lib/gcc/arm-elf/4.1.2/../../../../arm-elf/lib\libnewlib-lpc.a(_ioctl_r.o): In function `_ioctl_r':
C:\WinARM\utils\newlib_lpc_rel5a_src/_ioctl_r.c:75: undefined reference to `device_table'

So it appears there is something in one of the libraries that is causing the stdio stuff to be linked in. I don't use any of it explicitly, so it must be coming from the libraries somewhere.
Cheers!
Dan

Jim Kaz wrote:
> Something I came across. The article does __aeabi_atexit, whereas I
> think we want to use __cxa_atexit, since we are using GCC. In addition,
> we need to use the -fuse-cxa-atexit flag in the compiler. I have yet to
> actually get this to work though, since I keep getting the following
> error:
>
> 1>.out/HwMemSystem.o: In function
> `__static_initialization_and_destruction_0':
> 1>../../../../src/MemSystem/HwMemSystem.cpp:145: undefined reference to
> `__dso_handle'
>
> I also disabled the 'eh_' library includes, which would include eh_alloc,
> thus freeing up about 2K of RAM, but it doesn't seem to do that. The
> output file is exactly the same size, sigh.
>
> Hopefully some of my digging will be useful to you Dan.
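Tying together the early advice that the startup code should call the C++ constructors and the __dso_handle error above, here is an illustrative bare-metal sketch, not code from this thread. The table symbol names __ctors_start__/__ctors_end__ are our assumption and must match your linker script; defining __dso_handle yourself is the usual cure for the undefined reference seen with -fuse-cxa-atexit.

```cpp
// Illustrative sketch of minimal C++ runtime support for bare metal.
// Assumes the linker script gathers *(.ctors) between __ctors_start__
// and __ctors_end__ (these symbol names are ours, not from the thread).
typedef void (*ctor_fn)(void);
extern ctor_fn __ctors_start__[];
extern ctor_fn __ctors_end__[];

// Defining __dso_handle satisfies the `undefined reference to
// __dso_handle' error seen when building with -fuse-cxa-atexit.
extern "C" { void *__dso_handle = 0; }

// On a system that never exits, registering static destructors can be a
// no-op; __cxa_atexit is the hook GCC emits with -fuse-cxa-atexit.
extern "C" int __cxa_atexit(void (* /*dtor*/)(void *), void * /*obj*/, void * /*dso*/) {
    return 0;
}

// Call this from the reset handler after .data/.bss initialization and
// before main().
extern "C" void call_static_ctors(void) {
    for (ctor_fn *p = __ctors_start__; p != __ctors_end__; ++p) {
        (*p)();
    }
}
```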
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24856975674629211, "perplexity": 9815.401296509306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146341.16/warc/CC-MAIN-20200226084902-20200226114902-00154.warc.gz"}
http://www.digitalmars.com/d/archives/digitalmars/D/Moving_to_D_125828.html
## digitalmars.D - Moving to D • Adrian Mercieca (13/13) Jan 02 2011 Hi everyone, • bearophile (10/19) Jan 02 2011 Maybe. • Walter Bright (3/6) Jan 02 2011 Tango does not exist for D2. • Lutger Blijdestijn (13/32) Jan 02 2011 64 bit support is the main focus of dmd development at the moment. I tak... • bioinfornatics (3/3) Jan 02 2011 LDC exist for D2: https://bitbucket.org/prokhin_alexey/ldc2 • Nick Sabalausky (22/33) Jan 02 2011 Personally, I love D and can't stand Go (the lack of exceptions, generic... • bioinfornatics (3/3) Jan 02 2011 they are a D2 port for tango. It is not done. take source here: git clon... • Vladimir Panteleev (15/19) Jan 03 2011 How many people are working on this port? How many people will be • Ulrik Mikaelsson (14/24) Jan 03 2011 There aren't a lot of additions to D1 Tango nowadays, partly because • Adrian Mercieca (10/29) Jan 04 2011 Hi, • bearophile (9/13) Jan 05 2011 DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (64 b... • Nick Sabalausky (23/47) Jan 05 2011 OTOH, the design of D and Phobos2 strongly encourages fast techniques su... • Andrej Mitrovic (3/5) Jan 05 2011 I wonder if the reason for that is Optlink (iirc it doesn't support • Jonathan M Davis (6/12) Jan 05 2011 I believe that it's that and the fact that apparenly 64-bit stuff or Win... • Jacob Carlborg (8/64) Jan 05 2011 And sometimes Mac OS X is *slightly* ahead of the other OSes, Tango has • bearophile (5/11) Jan 05 2011 A quotation from here: • Nick Sabalausky (12/23) Jan 05 2011 Automatically accepting all submissions immediately into the main line w... • Walter Bright (9/19) Jan 05 2011 That's pretty much what I'm afraid of, losing my grip on how the whole t... • Caligo (6/9) Jan 06 2011 should have) commit rights, and they would send pull requests. You or o... • bearophile (8/14) Jan 06 2011 I agree with all you have said, I was not suggesting a wild west :-) • Nick Sabalausky (5/16) Jan 06 2011 I'm not sure I see how that's any different from everyone having "create... • Ulrik Mikaelsson (30/40) Jan 06 2011 or one • Walter Bright (2/20) Jan 06 2011 I don't, either. • bearophile (4/5) Jan 06 2011 Then it's a very good moment for starting to seeing/understanding this a... • Russel Winder (25/46) Jan 06 2011 (and • Walter Bright (10/17) Jan 06 2011 A couple months back, I did propose moving to git on the dmd internals m... • Jesse Phillips (3/27) Jan 06 2011 Git does not have its own merge tool. You are free to use meld. Though t... • Jesse Phillips (2/11) Jan 06 2011 Just realized you probably meant more than just resolving conflicts. And... • Michel Fortin (20/41) Jan 06 2011 I probably wasn't on the list at the time. I'm certainly interested, • Walter Bright (4/33) Jan 06 2011 Eh, that's inferior. The svn will will highlight what part of a line is • Lutger Blijdestijn (4/8) Jan 08 2011 What version are you on? I'm using 1.3.2 and its supports git and mercur... • Walter Bright (3/12) Jan 08 2011 The one that comes with: sudo apt-get meld • Michel Fortin (11/25) Jan 08 2011 I know you had your reasons, but perhaps it's time for you upgrade to a • Walter Bright (10/16) Jan 08 2011 I know. The last time I upgraded Ubuntu in place it f****d up my system ... • Vladimir Panteleev (15/16) Jan 08 2011 sudo apt-get build-dep meld • Andrej Mitrovic (3/19) Jan 08 2011 Now do it on Windows!! • Walter Bright (2/15) Jan 08 2011 Thanks, I'll give it a try! 
• Christopher Nicholson-Sauls (8/24) Jan 09 2011 I say you should consider moving away from *Ubuntu and to something more • Jonathan M Davis (18/43) Jan 09 2011 Yeah well, much as I like gentoo, if he didn't like dealing with the pai... • Gour (14/15) Jan 09 2011 Jonathan> Personally, I got sick of it and moved on. Currently, I use • Andrej Mitrovic (5/5) Jan 09 2011 I'm keeping my eye on BeyondCompare. But it's not free. It's $80 for • Andrej Mitrovic (4/9) Jan 09 2011 There's at least one caveat though: it doesn't natively support D • retard (4/30) Jan 10 2011 Gentoo really needs a high-end computer to run fast. FWIW, the same meld... • Walter Bright (21/39) Jan 18 2011 It doesn't work: • KennyTM~ (2/41) Jan 18 2011 You should use LF ending, not CRLF ending. • Walter Bright (120/121) Jan 19 2011 I never thought of that. Fixing that, it gets further, but still innumer... • Vladimir Panteleev (9/13) Jan 19 2011 If apt-get update doesn't fix it, only an update will - looks like your ... • Walter Bright (2/15) Jan 19 2011 Yeah, I figured that. Thanks for the try, anyway! • retard (13/20) Jan 19 2011 I already told you in message digitalmars.d:126586 • retard (7/23) Jan 19 2011 So.. the situation is so bad that you can't install ANY packages anymore... • Gour (10/13) Jan 19 2011 That's why we wrote it would be better to use some rolling release • Vladimir Panteleev (8/16) Jan 19 2011 Walter needs something he can install and get on with compiler hacking. ... • Jeff Nowakowski (17/20) Jan 19 2011 https://wiki.archlinux.org/index.php/FAQ : • Gary Whatmore (9/32) Jan 19 2011 This is something the Gentoo and Arch fanboys don't get. They don't have... • Gour (31/48) Jan 19 2011 First of all I spent >5yrs with Gentoo before jumping to Arch and • Gour (26/33) Jan 19 2011 I've feeling that you just copied the above from FAQ and never • Jeff Nowakowski (16/28) Jan 20 2011 No, I haven't tried it. I'm not going to try every OS that comes down • Jonathan M Davis (14/49) Jan 20 2011 There is no question that Arch takes more to manage than a number of oth... • Gour (30/44) Jan 20 2011 Then please, without any offense, do not give advises about something • Jeff Nowakowski (27/41) Jan 20 2011 Please yourself. I quoted from the FAQ from the distribution's main • Gour (22/39) Jan 20 2011 Arch simply does not offer false promises that system will "Just • retard (27/80) Jan 20 2011 It's the same in Ubuntu. You can install the minimal server build and • Andrew Wiley (9/18) Jan 20 2011 Ironically, I did this a few years back with an Arch box that was setup, • Walter Bright (18/21) Jan 21 2011 I finally did do it, but as a clean install. I found an old 160G drive, ... • Andrei Alexandrescu (8/13) Jan 21 2011 I think we must change to our own routines anyway. One strategic • Walter Bright (3/14) Jan 22 2011 We can also make our own conversion routines consistent, pure, thread sa... • Gour (24/27) Jan 21 2011 On Fri, 21 Jan 2011 22:35:55 -0800 • Walter Bright (9/12) Jan 22 2011 OSX is the only OS (besides DOS) I've had that had painless upgrades. Wi... • spir (14/22) Jan 22 2011 Same in my experience. I had to recently re-install from scratch my • retard (8/24) Jan 22 2011 Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird • Daniel Gibson (6/30) Jan 22 2011 Ubuntu doesn't include Lightning, either. • Walter Bright (2/4) Jan 22 2011 I'm really not interested in Google owning my private data. • Andrei Alexandrescu (8/12) Jan 22 2011 Google takes email privacy very seriously. 
Only last week they fired an • Walter Bright (12/16) Jan 22 2011 That's good to know. On the other hand, Google keeps information forever... • Vladimir Panteleev (9/12) Jan 22 2011 Hi Walter, have you seen this yet? It's an article on how to import your... • spir (7/16) Jan 22 2011 Yes, lightning seems to have been the successor mozilla project to • Walter Bright (3/4) Jan 22 2011 Thanks for finding that. But I think I'll stick for now with the ipod's • retard (4/9) Jan 22 2011 Does the new Ubuntu overall work better than the old one? Would be • Daniel Gibson (3/12) Jan 22 2011 And is the support for the graphics chip better, i.e. can you use full • Walter Bright (2/4) Jan 22 2011 Yes, it recognized my resolution automatically. That's a nice improvemen... • Walter Bright (4/6) Jan 22 2011 I haven't tried the sound yet, but the video playback definitely is bett... • retard (7/16) Jan 22 2011 Ubuntu probably uses Compiz if you have enabled desktop effects. This • spir (5/8) Jan 22 2011 Same for me ;-) • Jonathan M Davis (18/33) Jan 08 2011 A while back I took to putting /home on a separate partition from the ro... • Walter Bright (2/4) Jan 08 2011 I think it's less than a year old. • Jonathan M Davis (6/11) Jan 08 2011 Hmm. I thought that someone said that the version you were running was f... • Russel Winder (50/82) Jan 09 2011 so • retard (18/37) Jan 10 2011 Ubuntu has a menu entry for "restricted drivers". It provides support fo... • Walter Bright (13/37) Jan 11 2011 My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged... • Andrej Mitrovic (37/46) Jan 11 2011 That's my biggest problem with Linux. Having technical problems is not • Walter Bright (6/9) Jan 11 2011 The worst ones begin with "you might try this..." or "I think this might... • Daniel Gibson (20/29) Jan 11 2011 Those results are often in big forums like ubuntuforums.org that get a l... • Andrej Mitrovic (5/5) Jan 11 2011 Google does seem to take into account whatever information it has on • Jesse Phillips (4/10) Jan 11 2011 Best place to go for ranking information on your website: • Nick Sabalausky (7/39) Jan 11 2011 That's probably one of the biggest things that's always bothered me abou... • Christopher Nicholson-Sauls (14/22) Jan 12 2011 Nobody. • Russel Winder (15/17) Jan 11 2011 dows=20 • retard (8/22) Jan 10 2011 One thing came to my mind. Unless you're using Ubuntu 8.04 LTS, your • Walter Bright (7/15) Jan 11 2011 What annoyed the heck out of me was the earlier (7.xx) version of Ubuntu... • Jacob Carlborg (5/48) Jan 08 2011 Have you heard of gitx? I suggest you take a look at it: • Russel Winder (16/22) Jan 08 2011 o • Jesse Phillips (4/7) Jan 08 2011 Funny thing, gitk looks better on Windows. I don't care though. My frien... • Jacob Carlborg (5/16) Jan 08 2011 Doesn't the Tk widget set look hideous on all platforms. I can't • Jonathan M Davis (8/25) Jan 08 2011 Probably because you don't need much installed for them to work. About a... • Andrej Mitrovic (2/2) Jan 08 2011 Git Extensions looks pretty sweet for use on Windows (I haven't tried • Walter Bright (2/15) Jan 06 2011 • Jesse Phillips (7/13) Jan 06 2011 • Jean Crystof (5/18) Jan 11 2011 Huh! You should seriously consider upgrading. If you are running any kin... • Jean Crystof (9/12) Jan 11 2011 ASUS M2A-VM has 690G chipset. Wikipedia says: • Andrej Mitrovic (2/2) Jan 11 2011 Did you hear that, Walter? Just buy a 500$ video card so you can watch • Jean Crystof (2/4) Jan 11 2011 Dear Sir, did you even open the link? 
It's the cheapest Nvidia card I co... • Andrej Mitrovic (6/6) Jan 11 2011 Notice the smiley face -> :D • Jean Crystof (5/13) Jan 11 2011 That's not true. I suggested a low end card because if he's using integr... • Andrej Mitrovic (2/3) Jan 12 2011 You've never had computer equipment fail on you? • Walter Bright (14/18) Jan 12 2011 I've had a lot of computer equipment. • Vladimir Panteleev (8/9) Jan 12 2011 Let me guess, all cheap rubber-domes? Maybe you should have a look at so... • Walter Bright (3/11) Jan 12 2011 Yup, the $9.99 ones. They also get things spilled on them, why ruin an e... • Caligo (6/19) Jan 13 2011 http://www.daskeyboard.com/ • Nick Sabalausky (9/20) Jan 13 2011 I've got a$6 one I've been using for years, and I frequently beat the s... • Stanislav Blinov (8/27) Jan 14 2011 I felt very depressed when my first keyboard failed - the rubber shocks • Daniel Gibson (5/16) Jan 13 2011 There are washable keyboards, e.g. • Walter Bright (5/7) Jan 13 2011 I know. But what I do works for me. I happen to like the action on the c... • Andrej Mitrovic (15/15) Jan 13 2011 Lol Walter you're like me. I keep buying cheap keyboards all the time. • Walter Bright (9/19) Jan 14 2011 My preferred keyboard layout has the \ key right above the Enter key. Th... • Daniel Gibson (3/13) Jan 14 2011 Had something like that once, too. • Andrej Mitrovic (10/10) Jan 13 2011 I forgot to mention though, do *not* open up a MX518 unless you want • Walter Bright (2/5) Jan 14 2011 No prob. I've got some tools in the basement that will take care of that... • Jesse Phillips (3/7) Jan 12 2011 Wow, I have never had a keyboard fail. I'm stilling using my first keybo... • spir (12/31) Jan 13 2011 Same for me. Cheap hardware as well; and as standard as possible. • Sean Kelly (2/5) Jan 13 2011 I don't overclock any more after a weird experience I had overclocking a... • Nick Sabalausky (14/33) Jan 13 2011 My failure list from most to least would be this: • Walter Bright (4/19) Jan 13 2011 My printer problems ended (mostly) when I finally spent the bux and got ... • Robert Clipsham (13/16) Jan 14 2011 Now this surprises me, printing has been the least painless thing I've • Daniel Gibson (11/24) Jan 14 2011 This really depends on your printer, some have good Linux support and • Walter Bright (3/6) Jan 14 2011 Yeah, but I bought an *HP* laserjet, because I thought everyone supporte... • Daniel Gibson (10/17) Jan 14 2011 Yes, the HP Laserjets usually have really good support with PCL and • Walter Bright (3/21) Jan 14 2011 Nyuk nyuk nyuk • Daniel Gibson (15/34) Jan 14 2011 The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer... • Walter Bright (7/20) Jan 14 2011 The HP 2300D is parallel port. (The "D" stands for duplex, an extra cost... • Daniel Gibson (17/37) Jan 14 2011 HP says[1] it has also got USB, if their docs are correct for your versi... • Jean Crystof (13/18) Jan 14 2011 This thread sure was interesting. Now what I'd like is if Walter could p... • Jean Crystof (2/25) Jan 14 2011 I tried to find the package lists for Ubuntu 8.10 (intrepid), but they'r... • Andrei Alexandrescu (5/23) Jan 14 2011 The darndest thing is I have Ubuntu 8.10 on my laptop with KDE 3.5 on • Walter Bright (3/4) Jan 14 2011 To be fair, it was about the process of upgrading in place to Ubuntu 8.1... • Gour (13/16) Jan 14 2011 • retard (24/44) Jan 14 2011 I'm not sure if Walter's Ubuntu version already has this, but the latest... 
Adrian Mercieca <amercieca gmail.com> writes: Hi everyone, I am currently mulling over whether I should adopt D as my (and subsequently my company's) language of choice. We have great experience/investment in C++, so D seems - from what I've seen so far - the logical step; D seems to me to be C++ done right. I'm also looking at Go in the process, but Go seems to be more of a 'from C' progression, whilst D seems to be the 'from C++' progression. I am only worried about 2 things though - which I've read on the net: 1. No 64 bit compiler 2. The Phobos vs Tango issue: is this resolved now? This issue represents a major stumbling block for me. Any comments would be greatly appreciated. Thanks. Jan 02 2011 bearophile <bearophileHUGS lycos.com> writes: Adrian Mercieca: Welcome here. We have great experience/investment in C++, so D seems - from what I've seen so far - the logical step; Maybe. D seems to me to be C++ done right. "C++ done right" was one of the main purposes of D's design :-) I'm also looking at Go in the process, but Go seems to be more of a 'from C' progression, whilst D seems to be the 'from C++' progression. Go and D are quite different. You will probably need only a short time to work out which of the two you need more. There is also C# Mono. I am only worried about 2 things though - which I've read on the net: There are other things to be worried about :-) 1. No 64 bit compiler It's in development for Linux. It will come; it already compiles some code. 2. The Phobos vs Tango issue: is this resolved now? This issue represents a major stumbling block for me. The Phobos vs Tango issue is essentially a D1 issue. If you are interested in D2 then Phobos is going to be good enough. Bye, bearophile Jan 02 2011 Walter Bright <newshound2 digitalmars.com> writes: Adrian Mercieca wrote: 1. No 64 bit compiler The 64 bit dmd compiler (for Linux) is nearing alpha stage. 2. The Phobos vs Tango issue: is this resolved now? This issue represents a major stumbling block for me. Tango does not exist for D2. Jan 02 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Adrian Mercieca wrote: [...] 64 bit support is the main focus of dmd development at the moment. I take it that you would first evaluate D for a while; possibly 64-bit support will arrive when you are ready and need it. gdc development is also going strong. As for Tango vs Phobos, the situation now is that most development for the previous version of D (released circa 2007, iirc) is done with Tango. There is also a fine 64-bit compiler for D1, LDC. The feature set of D1 is frozen and significant (some backwards incompatible) changes have been made since.
There isn't any sign that Tango will be ported to D2, and Phobos is shaping up to be a fine library for D2. Some parts of Phobos are still in flux, though other parts are more stable. Perhaps you'll find this thread about experiences with D worth a read: http://thread.gmane.org/gmane.comp.lang.d.general/45993 Jan 02 2011 bioinfornatics <bioinfornatics fedoraproject.org> writes: LDC exists for D2: https://bitbucket.org/prokhin_alexey/ldc2 The same goes for Tango: a port to D2 exists, but the job is not done: git clone git://supraverse.net/tango.git Any help is welcome. Jan 02 2011 Adrian Mercieca <amercieca gmail.com> writes: On Sun, 02 Jan 2011 11:21:38 +0000, bioinfornatics wrote: LDC exists for D2: https://bitbucket.org/prokhin_alexey/ldc2 The same goes for Tango: a port to D2 exists, but the job is not done: git clone git://supraverse.net/tango.git Any help is welcome. Geez! That was quick! I see that the community is very, very alive. Ok - that clears up the issues re 64-bit and Phobos vs Tango; guess Phobos is the way to go with D2. Thanks a lot for your responses - very much appreciated. - Adrian. Jan 02 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/2/11 6:44 AM, Adrian Mercieca wrote: [...] I also recommend reading Adam Ruppe's recent posts. His tips on getting great work done in D in spite of its implementation's current imperfections are very valuable. Andrei Jan 02 2011 "Nick Sabalausky" <a a.a> writes: "Adrian Mercieca" <amercieca gmail.com> wrote in message news:ifpj8l$lnm$1 digitalmars.com... Hi everyone, [...] I'm also looking at Go in the process, but Go seems to be more of a 'from C' progression, whilst D seems to be the 'from C++' progression. Personally, I love D and can't stand Go (the lack of exceptions, generics, metaprogramming and decent memory-access are deal-breakers for me, and overall it seems like a one-trick pony - it has the interesting goroutines and that's about it). But since this is the D newsgroup you can probably expect we'll be bigger D fans here ;) I am only worried about 2 things though - which I've read on the net: 1. No 64 bit compiler 64-bit code generation is on the way and is Walter's top priority. In the meantime, I would recommend taking a good look at whether it really is necessary for your company's software. Certainly there are many things that benefit greatly from 64-bit, but even as "in-vogue" as 64-bit is, most things don't actually *need* it. And there are still plenty of times when 64-bit won't even make any real difference anyway. But regardless, 64-bit is absolutely on the way and is very high priority. In fact, AIUI, the basic "Hello World" has been working for quite some time now. 2. The Phobos vs Tango issue: is this resolved now? This issue represents a major stumbling block for me. If you use D2, there is no Tango. Just Phobos.
And there are no plans for Tango to move to D2. If you use D1, Tango is really the "de facto" std lib because D1's Phobos is extremely minimal. (D1's Phobos was created way back before there was a real Phobos development team and Walter had to divide his time between language and library, and language was of course the higher priority.) So no, it's really not the issue it's been made out to be. Jan 02 2011 bioinfornatics <bioinfornatics fedoraproject.org> writes: There is a D2 port of Tango. It is not done. Take the source here: git clone git://supraverse.net/tango.git The job is almost done. Everyone can help with this job: take a D2 compiler, build, and fix errors. Jan 02 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Sun, 02 Jan 2011 20:34:40 +0200, bioinfornatics <bioinfornatics fedoraproject.org> wrote: There is a D2 port of Tango. It is not done. Take the source here: git clone git://supraverse.net/tango.git The job is almost done. Everyone can help with this job: take a D2 compiler, build, and fix errors. How many people are working on this port? How many people will be interested in using it, considering that a direct port won't use many of D2's features (why not just use D1)? Will this port be around in 1 year? 5 years? Will it have the same kind of momentum as the original D1 version, with as many developers working on it, fixing bugs etc.? Will the API always stay in sync with the developments in the original D1 version? What about all the existing documentation, tutorials, even book(s)? Sorry, having more options is a good thing, but I think there is a lot more to a real "Tango for D2" than just someone fixing the code so it compiles and works. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 03 2011 Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes: How many people are working on this port? [...] What about all the existing documentation, tutorials, even book(s)? There aren't a lot of additions to D1 Tango nowadays, partly because people seem to have other things to do, partly because most of it works pretty nicely already. That said, I think the D2 version of Tango will be a one-time fork. Regarding how many people are working on the D2 fork, I think it's quite few (AFAICT only Marenz). The general consensus in Tango has been to wait for D2 to be "finalized" before investing effort into porting. Sorry, having more options is a good thing, but I think there is a lot more to a real "Tango for D2" than just someone fixing the code so it compiles and works. Agreed, but it doesn't all have to happen on day 1. Just being able to port Tango apps over to D2 with minimal fuss is valuable in itself. Anyways, IMHO one of the most important advances in D2 is the separation of the runtime from the system library, such that Phobos and Tango can co-exist more easily, reducing fragmentation. Jan 03 2011 Trass3r <un known.com> writes: Agreed, but it doesn't all have to happen on day 1. Just being able to port Tango apps over to D2 with minimal fuss is valuable in itself.
Anyways, IMHO one of the most important advances in D2 is the separation of the runtime from the system library, such that Phobos and Tango can co-exist more easily, reducing fragmentation. So true. Jan 03 2011 Adrian Mercieca <amercieca gmail.com> writes: Hi, One other question.... How does D square up, performance-wise, to C and C++? Has anyone got any benchmark figures? How does D compare in this area? Also, is D more of a Windows oriented language? Do the Linux and OSX versions get as much attention as the Windows one? Thanks. Adrian. On Sun, 02 Jan 2011 10:15:49 +0000, Adrian Mercieca wrote: [...] Jan 04 2011 bearophile <bearophileHUGS lycos.com> writes: Adrian Mercieca: How does D square up, performance-wise, to C and C++? Has anyone got any benchmark figures? DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (the 64 bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too. I've seen DMD programs slow down if you nest two foreach inside each other. There is a collection of different slow microbenchmarks. But LDC1 is able to run D1 code that looks like C about equally fast as C, or sometimes a bit faster. DMD2 uses thread-local memory by default, which in theory slows code down a bit if you use global data, but I have never seen a benchmark that shows this slowdown clearly (and there is __gshared too, but sometimes it seems a placebo). If you use higher level constructs your program will often go slower. Often one of the most important things for speed is memory management: D encourages heap allocating a lot (class instances are usually on the heap), and this is very bad for performance, also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's largely a matter of self-discipline while you program. Also, is D more of a Windows oriented language? Do the Linux and OSX versions get as much attention as the Windows one? The Windows version is receiving enough attention, it's not ignored by Walter. But I think that for some time the 64 bit version will not be available for Windows. Bye, bearophile Jan 05 2011
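To make the stack-vs-heap advice above concrete, here is a minimal D sketch (the type names are invented for illustration, not taken from any real benchmark):

struct PointS { double x, y; }  // value type: a local lives on the stack, no GC involved
class PointC { double x, y; }   // reference type: 'new' allocates it on the GC heap
__gshared int counter;          // opts a global out of D2's default thread-local storage

void main()
{
    PointS a;             // stack allocation: cheap, no GC pressure
    auto b = new PointC;  // GC heap allocation: the cost bearophile warns about
    PointS[100] buf;      // a fixed-size array also lives on the stack
}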
"Nick Sabalausky" <a a.a> writes: "bearophile" <bearophileHUGS lycos.com> wrote in message news:ig1d3l$kts$1 digitalmars.com... [...] Often one of the most important things for speed is memory management: D encourages heap allocating a lot (class instances are usually on the heap), and this is very bad for performance, also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's largely a matter of self-discipline while you program. OTOH, the design of D and Phobos2 strongly encourages fast techniques such as array slicing, pre-computation at compile-time, and appropriate use of things like caching and lazy evaluation. Many of these things probably can be done in C/C++, technically speaking, but D makes them far easier and more accessible, and thus more likely to actually get used. As an example, see how D's built-in array slicing helped Tango's XML lib beat the snot out of other languages' fast-XML libs: http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/ - and look at the two benchmarks the first paragraph links to. Also, is D more of a Windows oriented language? Do the Linux and OSX versions get as much attention as the Windows one? Linux, Windows and OSX are all strongly supported. Sometimes OSX might lag *slightly* in one thing or another, but that's only because there aren't nearly as many people using D on Mac and giving it a good workout. And even at that, it's still only gotten better since Walter got his own Mac box to test on. And Linux is maybe *slightly* ahead of even Windows because, like bearophile said, it'll get 64-bit support first, and also because the Linux DMD uses the standard Linux object-file format while Windows DMD is still using a fairly uncommon object-file format (but that only matters if you want to link object files from different compilers, and if you do want to, I think there are object file converters out there). But yea, overall, all of the big 3 OSes get plenty of attention. Jan 05 2011
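Nick's slicing point in a few lines of D (an illustrative fragment, not code from the Tango XML library):

void main()
{
    string xml = "<name>D</name>";
    string tag = xml[1 .. 5];  // a slice is a view into the same memory: no copy, no allocation
    assert(tag == "name");     // parsing by slicing avoids the per-node string copies many parsers make
}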
Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/5/11, Nick Sabalausky <a a.a> wrote: And Linux is maybe *slightly* ahead of even Windows because, like bearophile said, it'll get 64-bit support first.. I wonder if the reason for that is Optlink (iirc it doesn't support 64bit even for DMC, right?). Jan 05 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Wednesday, January 05, 2011 09:59:08 Andrej Mitrovic wrote: On 1/5/11, Nick Sabalausky <a a.a> wrote: And Linux is maybe *slightly* ahead of even Windows because, like bearophile said, it'll get 64-bit support first.. I wonder if the reason for that is Optlink (iirc it doesn't support 64bit even for DMC, right?). I believe that it's that and the fact that apparently 64-bit stuff on Windows is very different from 32-bit stuff, whereas on Linux, for the most part, it's the same. So, it's a much easier port. Of course, Walter would know the specifics on that better than I would. - Jonathan M Davis Jan 05 2011 Jacob Carlborg <doob me.com> writes: On 2011-01-05 18:37, Nick Sabalausky wrote: [...] But yea, overall, all of the big 3 OSes get plenty of attention. And sometimes Mac OS X is *slightly* ahead of the other OSes: Tango has had support for dynamic libraries on Mac OS X using DMD for quite a while now. For D2 a patch is just sitting there in bugzilla waiting for the last part of it to be committed. I'm really pushing this because people seem to forget this. -- /Jacob Carlborg Jan 05 2011 bearophile <bearophileHUGS lycos.com> writes: Jacob Carlborg: And sometimes Mac OS X is *slightly* ahead of the other OSes: Tango has had support for dynamic libraries on Mac OS X using DMD for quite a while now. For D2 a patch is just sitting there in bugzilla waiting for the last part of it to be committed. I'm really pushing this because people seem to forget this.
A quotation from here: http://whatupdave.com/post/1170718843/leaving-net Also stop using codeplex it’s not real open source! Real open source isn’t submitting a patch and waiting/hoping that one day it might be accepted and merged into the main line.< Bye, bearophile Jan 05 2011 "Nick Sabalausky" <a a.a> writes: "bearophile" <bearophileHUGS lycos.com> wrote in message news:ig2oe8$eki$1 digitalmars.com... [...] Also stop using codeplex it’s not real open source! Real open source isn’t submitting a patch and waiting/hoping that one day it might be accepted and merged into the main line.< Automatically accepting all submissions immediately into the main line with no review isn't a good thing either. In that article he's complaining about MS, but MS is notorious for ignoring all non-MS input, period. D's already light-years ahead of that. Since D's a purely volunteer effort, and with a lot of things to be done, sometimes things *are* going to take a while to get in. But there's just no way around that without major risks to quality. And yea, Walter could grant main-line DMD commit access to others, but then we'd be left with a situation where no single lead dev understands the whole program inside and out - and when that happens to projects, that's inevitably the point where it starts to go downhill. Jan 05 2011 Walter Bright <newshound2 digitalmars.com> writes: Nick Sabalausky wrote: [...] And yea, Walter could grant main-line DMD commit access to others, but then we'd be left with a situation where no single lead dev understands the whole program inside and out - and when that happens to projects, that's inevitably the point where it starts to go downhill. That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. On the bright (!) side, Brad Roberts has gotten the test suite in shape so that anyone developing a patch can run it through the full test suite, which is a prerequisite to getting it folded in. In the last release, most of the patches in the changelog were done by people other than myself, although yes, I vet and double check them all before committing them. Jan 05 2011 Caligo <iteronvexor gmail.com> writes: On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote: That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius Jan 06 2011
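A sketch of the fork-and-pull workflow Caligo describes, with illustrative repository and branch names (nothing here is taken from an actual D repository):

# contributor: fork on the web, then work on a topic branch
git clone git://github.com/yourname/project.git
cd project
git checkout -b my-fix
# ... edit and test ...
git commit -a -m "describe the fix"
git push origin my-fix
# then ask the maintainer to pull (e.g. open a pull request)

# maintainer: review the branch and merge it
git pull git://github.com/yourname/project.git my-fix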
bearophile <bearophileHUGS lycos.com> writes: Nick Sabalausky: Automatically accepting all submissions immediately into the main line with no review isn't a good thing either.< I agree with all you have said, I was not suggesting a wild west :-) But maybe there are ways to improve the situation a little; I don't think the current situation is perfect. A better revision control system like Git or Mercurial (they are not equal, but both are good enough) will be an improvement. ------------------ Caligo: Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius I agree. Such systems allow finding a better middle point between wild freedom and frozen proprietary control than the current one. Walter and a few others are the only ones allowed to commit to the main trunk, so Walter has no risk of "losing grip on how the whole thing works", but freedom in submitting patches and creating branches allows people more experimentation, simpler review of patches and trunks, turning D/DMD into a more open source effort... So I suggest that Walter consider all this. Bye, bearophile Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.451.1294306555.4748.digitalmars-d puremagic.com... On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote: That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. Jan 06 2011 Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes: 2011/1/6 Nick Sabalausky <a a.a>: [...] I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. With the risk of starting yet another VCS-flamewar: It gives the downstream developers an easier option to work on multiple patches in patch-sets. Many non-trivial changes are too big to do in a single step, but require a series of changes.
Sure, the downstream hacker could maintain an import/conversion to a VCS, but that is extra work, and when Walter or someone else gets to review them they are no longer well-annotated patches. It also facilitates a setup where Walter (BDFL? ;) starts to trust some contributors (if he wants to) more than others, letting them work on private branches and submit larger series of patches for each release. Especially, when you detect a showstopper bug that blocks your progress, IMHO, it's easier using a DVCS to maintain a local patch for the needed fix, until upstream includes it. I've often used that strategy both in D-related and other projects just to remain sane and work around upstream bugs; I just usually have to jump through some hoops getting the source into a DVCS in the first place. I think it was on this list I saw the comparison of VCSes to the Blub problem? http://en.wikipedia.org/wiki/Blub#Blub Although I don't think the current setup has any _serious_ problems, I think there might be slight advantages to gain. OTOH, unless other current key contributors want to push it, it's probably not worth the cost of change. Jan 06 2011 Walter Bright <newshound2 digitalmars.com> writes: Nick Sabalausky wrote: [...] I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. I don't, either. Jan 06 2011 bearophile <bearophileHUGS lycos.com> writes: Walter Bright: I don't, either. Then it's a very good moment to start seeing/understanding this and similar things! Bye, bearophile Jan 06 2011 Russel Winder <russel russel.org.uk> writes: On Thu, 2011-01-06 at 03:10 -0800, Walter Bright wrote: [...] I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. I don't, either. Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review.
Although only anecdotal: in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. -- Russel. Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder Jan 06 2011 Walter Bright <newshound2 digitalmars.com> writes: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal: in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Walter Bright Wrote: [...] One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool. Jan 06 2011
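Jesse is presumably referring to git's mergetool mechanism; a sketch of how meld is typically wired in (the commands are standard git, but treat the mapping to what he calls "gitmerge" as an assumption):

git config --global merge.tool meld
git mergetool   # during a conflicted merge, opens meld on each conflicted file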
Jesse Phillips <jessekphillips+D gmail.com> writes: Jesse Phillips Wrote: [...] Though there is gitmerge which can run meld as the merge tool. Just realized you probably meant more than just resolving conflicts. And you might be interested in git cherry-picking. I haven't done it myself and don't know if meld could be used for it. Jan 06 2011 Michel Fortin <michel.fortin michelf.com> writes: On 2011-01-06 15:01:18 -0500, Jesse Phillips <jessekphillips+D gmail.com> said: Walter Bright Wrote: A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. I probably wasn't on the list at the time. I'm certainly interested; it'd certainly make it easier for me, as I'm using git locally to access that repo. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2 That's only if you're hosted on github. If you install on your own server, git comes with a web interface that looks like this (pointing to a specific diff): <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24> Also when I want an overview with git I just type gitk on the command line to bring up a window where I can browse the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png> where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool. Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-) -- Michel Fortin michel.fortin michelf.com http://michelf.com/ Jan 06 2011 Walter Bright <newshound2 digitalmars.com> writes: Michel Fortin wrote: [...] That's only if you're hosted on github. If you install on your own server, git comes with a web interface that looks like this (pointing to a specific diff): <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24> Eh, that's inferior. The svn view will highlight what part of a line is different, rather than just the whole line. Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-) I use git for other projects, and meld doesn't work with it. Jan 06 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Walter Bright wrote: Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-) I use git for other projects, and meld doesn't work with it. What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff; I take it this is what you mean by supporting a vcs).
Jan 08 2011 Walter Bright <newshound2 digitalmars.com> writes: Lutger Blijdestijn wrote: Walter Bright wrote: Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-) I use git for other projects, and meld doesn't work with it. What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff; I take it this is what you mean by supporting a vcs). The one that comes with: sudo apt-get install meld 1.1.5.1 Jan 08 2011 Michel Fortin <michel.fortin michelf.com> writes: On 2011-01-08 15:36:39 -0500, Walter Bright <newshound2 digitalmars.com> said: [...] The one that comes with: sudo apt-get install meld 1.1.5.1 I know you had your reasons, but perhaps it's time for you to upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (April 2008). <https://launchpad.net/ubuntu/+source/meld> Or you could download the latest version from meld's website and compile it yourself. -- Michel Fortin michel.fortin michelf.com http://michelf.com/ Jan 08 2011 Walter Bright <newshound2 digitalmars.com> writes: Michel Fortin wrote: I know you had your reasons, but perhaps it's time for you to upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (April 2008). <https://launchpad.net/ubuntu/+source/meld> I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it. Or you could download the latest version from meld's website and compile it yourself. Yeah, I could spend an afternoon doing that. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.

sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;) (Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed.
For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.) -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 08 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Now do it on Windows!! Now that *would* probably take an afternoon. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Sun, 09 Jan 2011 02:34:42 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: Now do it on Windows!! Now that *would* probably take an afternoon. Done! Just had to install PyGTK. (Luckily for me, meld is written in Python, so there was no need to mess with MinGW :P) From taking a quick look, I don't see meld's advantage over WinMerge (other than being cross-platform). -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 08 2011 Walter Bright <newshound2 digitalmars.com> writes: Vladimir Panteleev wrote: From taking a quick look, I don't see meld's advantage over WinMerge (other than being cross-platform). Thanks for pointing me at winmerge. I've been looking for one to work on Windows. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Sun, 09 Jan 2011 04:17:21 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Thanks for pointing me at winmerge. I've been looking for one to work on Windows. Actually, I just noticed that WinMerge doesn't have three-way merge (in all instances when I needed it my SCM launched TortoiseMerge). That's probably a show-stopper for you. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 08 2011 "Jérôme M. Berger" <jeberger free.fr> writes: Walter Bright wrote: Vladimir Panteleev wrote: From taking a quick look, I don't see meld's advantage over WinMerge (other than being cross-platform). Thanks for pointing me at winmerge. I've been looking for one to work on Windows. I personally use kdiff3 [1] both on Linux and Windows. Jerome [1] http://kdiff3.sourceforge.net/ -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr Jan 09 2011 "Nick Sabalausky" <a a.a> writes: "Walter Bright" <newshound2 digitalmars.com> wrote in message news:igb5uo$26af$1 digitalmars.com... [...] Thanks for pointing me at winmerge. I've been looking for one to work on Windows. Beyond Compare and Ultra Compare Jan 11 2011 Walter Bright <newshound2 digitalmars.com> writes: Vladimir Panteleev wrote: [...] You're welcome ;) Thanks, I'll give it a try! Jan 08 2011 Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes: On 01/08/11 20:18, Walter Bright wrote: [...] Thanks, I'll give it a try!
I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute.
--
Chris N-S Jan 09 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday 09 January 2011 04:00:21 Christopher Nicholson-Sauls wrote: On 01/08/11 20:18, Walter Bright wrote: Vladimir Panteleev wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install
You're welcome ;) Thanks, I'll give it a try! I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute. Yeah well, much as I like gentoo, if he didn't like dealing with the pain of an Ubuntu upgrade messing with his machine, I doubt that he'll be enamoured with having to keep figuring out how to fix his machine because one of the builds didn't work on an update in Gentoo. Gentoo definitely has some great stuff going for it, but you have to be willing to deal with fixing your machine on a semi-regular basis. Personally, I got sick of it and moved on. Currently, I use Arch, which is _way_ more friendly for building non-repo packages yourself or otherwise messing with repo packages. You _can_ choose to build from source but don't _have_ to, and you get a rolling release like you effectively get with Gentoo. So, I'm much happier with Arch than I was with Gentoo. But regardless, there's no need to start an argument over distros. They all have their pros and cons, and everyone is going to prefer one over another. Still, Gentoo is one of those distros where you have to expect to work at maintaining your machine, whereas Ubuntu really isn't. So, I wouldn't normally recommend Gentoo to someone who's using Ubuntu unless they're specifically looking for something like Gentoo. - Jonathan M Davis Jan 09 2011

Gour <gour atmarama.net> writes:
On Sun, 9 Jan 2011 04:15:07 -0800 "Jonathan" == <jmdavisProg gmx.com> wrote: Jonathan> Personally, I got sick of it and moved on. Currently, I use Jonathan> Arch, which is _way_ more friendly for building non-repo Jonathan> packages yourself or otherwise messing with repo packages. You Jonathan> _can_ choose to build from source but don't _have_ to, and Jonathan> you get a rolling release like you effectively get with Jonathan> Gentoo. So, I'm much happier with Arch than I was with Jonathan> Gentoo. +1 (after spending 5yrs with Gentoo...and never looked back) Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 09 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I'm keeping my eye on BeyondCompare. But it's not free. It's $80 for the dual platform Linux+Windows and the Pro version which features 3-way merge. Its customization options are great though. There's a trial version over at http://www.scootersoftware.com/ if you want to give it a spin. Jan 09 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/9/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: I'm keeping my eye on BeyondCompare.
But it's not free. It's $80 for the dual platform Linux+Windows and the Pro version which features 3-way merge. Its customization options are great though. There's a trial version over at http://www.scootersoftware.com/ if you want to give it a spin. There's at least one caveat though: it doesn't natively support D files. So the best thing to do is add *.d and *.di as file masks for its C++ parser. Jan 09 2011

retard <re tard.com.invalid> writes:
Sun, 09 Jan 2011 06:00:21 -0600, Christopher Nicholson-Sauls wrote: On 01/08/11 20:18, Walter Bright wrote: Vladimir Panteleev wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install
You're welcome ;) Thanks, I'll give it a try! I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute. Gentoo really needs a high-end computer to run fast. FWIW, the same meld takes 7 seconds to install on my ubuntu. That includes fetching the package from the internet (1-2 seconds). Probably even faster on Arch. Jan 10 2011

Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/10/11 21:14, retard wrote: Sun, 09 Jan 2011 06:00:21 -0600, Christopher Nicholson-Sauls wrote: On 01/08/11 20:18, Walter Bright wrote: Vladimir Panteleev wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install
You're welcome ;) Thanks, I'll give it a try! I say you should consider moving away from *Ubuntu and to something more "developer-friendly" such as Gentoo, where the command to install meld is just: emerge meld ...done. And yes, that's an install from source. I just did it myself, and it took right at one minute. Gentoo really needs a high-end computer to run fast. Tell that to the twelve year old machine here in our living room, running the latest Gentoo profile with KDE 4.x all with no problem. FWIW, the same meld takes 7 seconds to install on my ubuntu. That includes fetching the package from the internet (1-2 seconds). Probably even faster on Arch. Sure, and my wife's Kubuntu machine would probably do the same -- since *Ubuntu installs pre-compiled binaries (some packages are available as source, as I recall, but very few). I acknowledge that you disclaimed your statement with a "FWIW" but I have to say it isn't much of a comparison: pre-compiled binaries versus locally built from source. I only really brought up how long it took because of Walter's "spend an afternoon" comment anyhow, so really we both "win" in this case. ;) And yes, I'm an unashamed Gentoo advocate to begin with. Been using it as both server and personal desktop OS for years now. (Of course half or more of what I love about it is portage, which can be used with other distros -- and BSD! -- although I know nothing about how one sets that up.)
--
Chris N-S Jan 12 2011
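For reference, a minimal sketch of the install flow Christopher describes; the tree-sync step is my assumption, since his post only shows the emerge itself:

  # refresh the portage tree, then build and install meld from source
  sudo emerge --sync
  sudo emerge meld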
Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install
You're welcome ;) (Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.) It doesn't work:
walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07-- http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
cd: meld-1.5.0\r: No such file or directory
make: *** No rule to make target `install'. Stop.
Jan 18 2011

KennyTM~ <kennytm gmail.com> writes:
On Jan 19, 11 13:38, Walter Bright wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I could spend an afternoon doing that.
sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install
You're welcome ;) (Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.) It doesn't work:
walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07-- http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-01-18 21:35:08 ERROR 404: Not Found.
tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
cd: meld-1.5.0\r: No such file or directory
make\r: command not found
make: *** No rule to make target `install'. Stop.
You should use LF ending, not CRLF ending. Jan 18 2011
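A minimal sketch of the fix KennyTM~ is pointing at, assuming the recipe above was saved as a script called buildmeld with Windows (CRLF) line endings; the stray carriage returns are why wget requested meld-1.5.0.tar.bz2%0D and tar looked for meld-1.5.0.tar.bz2\r:

  # strip the carriage returns, then re-run the script
  tr -d '\r' < buildmeld > buildmeld.lf
  mv buildmeld.lf buildmeld
  chmod +x buildmeld
  ./buildmeld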
Walter Bright <newshound2 digitalmars.com> writes:
KennyTM~ wrote: You should use LF ending, not CRLF ending. I never thought of that. Fixing that, it gets further, but still innumerable errors:
walter mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed: autoconf automake1.7 autotools-dev cdbs debhelper fdupes gettext gnome-pkg-tools html2text intltool intltool-debian libmail-sendmail-perl libsys-hostname-long-perl m4 po-debconf python-dev python2.5-dev
0 upgraded, 17 newly installed, 0 to remove and 0 not upgraded.
Need to get 7387kB of archives.
After this operation, 23.9MB of additional disk space will be used.
Do you want to continue [Y/n]? Y
WARNING: The following packages cannot be authenticated! m4 autoconf autotools-dev automake1.7 html2text gettext intltool-debian po-debconf debhelper fdupes intltool cdbs gnome-pkg-tools libsys-hostname-long-perl libmail-sendmail-perl python2.5-dev python-dev
Install these packages without verification [y/N]? y
Get:1 http://ca.archive.ubuntu.com intrepid/main m4 1.4.11-1 [263kB]
Err http://ca.archive.ubuntu.com intrepid/main autoconf 2.61-7ubuntu1
Err http://ca.archive.ubuntu.com intrepid/main autotools-dev 20080123.1
Get:2 http://ca.archive.ubuntu.com intrepid/main automake1.7 1.7.9-9 [391kB]
Get:3 http://ca.archive.ubuntu.com intrepid/main html2text 1.3.2a-5 [95.6kB]
Err http://ca.archive.ubuntu.com intrepid/main gettext 0.17-3ubuntu2
Get:4 http://ca.archive.ubuntu.com intrepid/main intltool-debian 0.35.0+20060710.1 [31.6kB]
Get:5 http://ca.archive.ubuntu.com intrepid/main po-debconf 1.0.15ubuntu1 [237kB]
Err http://ca.archive.ubuntu.com intrepid/main debhelper 7.0.13ubuntu1
Get:6 http://ca.archive.ubuntu.com intrepid/main fdupes 1.50-PR2-1 [19.1kB]
Err http://ca.archive.ubuntu.com intrepid/main intltool 0.40.5-0ubuntu1
Err http://ca.archive.ubuntu.com intrepid/main cdbs 0.4.52ubuntu7
Err http://ca.archive.ubuntu.com intrepid/main gnome-pkg-tools 0.13.6ubuntu1
Get:7 http://ca.archive.ubuntu.com intrepid/main libsys-hostname-long-perl 1.4-2 [11.4kB]
Err http://ca.archive.ubuntu.com intrepid/main libmail-sendmail-perl 0.79-5
Err http://ca.archive.ubuntu.com intrepid-updates/main python2.5-dev 2.5.2-11.1ubuntu1.1
Err http://ca.archive.ubuntu.com intrepid/main python-dev 2.5.2-1ubuntu1
Err http://security.ubuntu.com intrepid-security/main python2.5-dev 2.5.2-11.1ubuntu1.1
Fetched 1050kB in 2s (403kB/s)
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autoconf/autoconf_2.61-7ubuntu1_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autotools-dev/autotools-dev_20080123.1_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gettext/gettext_0.17-3ubuntu2_amd64.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/d/debhelper/debhelper_7.0.13ubuntu1_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/i/intltool/intltool_0.40.5-0ubuntu1_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/c/cdbs/cdbs_0.4.52ubuntu7_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gnome-pkg-tools/gnome-pkg-tools_0.13.6ubuntu1_all.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/libm/libmail-sendmail-perl/libmail-sendmail-perl_0.79-5_all.deb
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/python2.5/python2.5-dev_2.5.2-11.1ubuntu1.1_amd64.deb
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/python-dev_2.5.2-1ubuntu1_all.deb
E: Unable to fetch some archives, try running apt-get update or apt-get --fix-missing.
E: Failed to process build dependencies
--2011-01-19 03:07:16-- http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response...
200 OK
Length: 330845 (323K) [application/x-bzip2]
Saving to: `meld-1.5.0.tar.bz2'
100%[=============================================================>] 330,845 179K/s in 1.8s
2011-01-19 03:07:19 (179 KB/s) - `meld-1.5.0.tar.bz2' saved [330845/330845]
python tools/install_paths \
  libdir=/usr/local/lib/meld \
  localedir=/usr/local/share/locale \
  helpdir=/usr/local/share/gnome/help/meld \
  sharedir=/usr/local/share/meld \
  < bin/meld > bin/meld.install
python tools/install_paths \
  libdir=/usr/local/lib/meld \
  localedir=/usr/local/share/locale \
  helpdir=/usr/local/share/gnome/help/meld \
  sharedir=/usr/local/share/meld \
  < meld/paths.py > meld/paths.py.install
intltool-merge -d po data/meld.desktop.in data/meld.desktop
make: *** [meld.desktop] Error 127
intltool-merge -d po data/meld.desktop.in data/meld.desktop
make: *** [meld.desktop] Error 127
walter mercury:~$
Jan 19 2011

"Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright <newshound2 digitalmars.com> wrote: KennyTM~ wrote: You should use LF ending, not CRLF ending. I never thought of that. Fixing that, it gets further, but still innumerable errors: If apt-get update doesn't fix it, only an update will - looks like your Ubuntu version is so old, Canonical is no longer maintaining repositories for it. The only alternative is downloading and installing the components manually, and that probably will take half a day :P
--
Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 19 2011

Walter Bright <newshound2 digitalmars.com> writes:
Vladimir Panteleev wrote: On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright <newshound2 digitalmars.com> wrote: KennyTM~ wrote: You should use LF ending, not CRLF ending. I never thought of that. Fixing that, it gets further, but still innumerable errors: If apt-get update doesn't fix it, only an update will - looks like your Ubuntu version is so old, Canonical is no longer maintaining repositories for it. The only alternative is downloading and installing the components manually, and that probably will take half a day :P Yeah, I figured that. Thanks for the try, anyway! Jan 19 2011

retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote: KennyTM~ wrote: You should use LF ending, not CRLF ending. I never thought of that. Fixing that, it gets further, but still innumerable errors: [snip] I already told you in message digitalmars.d:126586 "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems" It's exactly like using Windows 3.11 now. Totally unsupported. It's so sad the leader of the D language is so incompetent with open source technologies. If you really want to stick with outdated operating system versions, why don't you install all the "stable" and "important" services on some headless virtual server (on another machine) and update the latest Ubuntu on your main desktop? It's hard to believe making backups of your /home/walter is so hard. That ought to be everything you need to do with desktop Ubuntu.. Jan 19 2011
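For what it's worth, a sketch of one possible workaround that nobody in the thread suggests (treat it as my assumption, not advice from the participants): Canonical moves end-of-life releases to old-releases.ubuntu.com, so rewriting the dead mirrors in sources.list can revive apt on an EOL release such as the intrepid one in Walter's logs:

  # point apt at the end-of-life archive, then refresh the package lists
  sudo sed -i 's|//ca.archive.ubuntu.com/ubuntu|//old-releases.ubuntu.com/ubuntu|g; s|//security.ubuntu.com/ubuntu|//old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list
  sudo apt-get update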
retard <re tard.com.invalid> writes:
Wed, 19 Jan 2011 19:15:54 +0000, retard wrote: Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote: KennyTM~ wrote: You should use LF ending, not CRLF ending. I never thought of that. Fixing that, it gets further, but still innumerable errors: [snip] I already told you in message digitalmars.d:126586 "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems" So.. the situation is so bad that you can't install ANY packages anymore. Accidentally removing packages can make the system unbootable and those applications are gone for good (unless you do a fresh reinstall). My bet is that if it isn't already impossible to upgrade to a new version, when they remove the repositories for the next Ubuntu version, you're completely fucked up. Jan 19 2011

Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 19:15:54 +0000 (UTC) retard <re tard.com.invalid> wrote: "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems" That's why we wrote it would be better to use some rolling release like Archlinux where the distro cannot become so outdated that it's not possible to upgrade easily. Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 19 2011

"Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Wed, 19 Jan 2011 23:18:13 +0200, Gour <gour atmarama.net> wrote: On Wed, 19 Jan 2011 19:15:54 +0000 (UTC) retard <re tard.com.invalid> wrote: "..your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems" That's why we wrote it would be better to use some rolling release like Archlinux where the distro cannot become so outdated that it's not possible to upgrade easily. Walter needs something he can install and get on with compiler hacking. ArchLinux sounds quite far from that. I'd just recommend upgrading to an Ubuntu LTS (to also minimize the requirement of familiarizing yourself with a new distribution).
--
Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 19 2011

Jeff Nowakowski <jeff dilacero.org> writes:
On 01/19/2011 04:18 PM, Gour wrote: That's why we wrote it would be better to use some rolling release like Archlinux where the distro cannot become so outdated that it's not possible to upgrade easily. https://wiki.archlinux.org/index.php/FAQ : "Q) Why would I not want to use Arch? A) [...] you do not have the ability/time/desire for a 'do-it-yourself' GNU/Linux distribution" I also don't see how Archlinux protects you from an outdated system. It's up to you to update your system. The longer you wait, the more chance incompatibilities creep in. However, the tradeoff is that if you update weekly or monthly, then you will spend more time encountering problems between upgrades. There's no silver bullet here. Personally, I think you should just suck it up, make a backup of your system (which you should be doing routinely anyways), and upgrade once a year. The worst case scenario is that you re-install from scratch. It's probably better to do that once in a while anyways, as cruft tends to accumulate when upgrading in place. Jan 19 2011
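A minimal sketch of the routine backup Jeff recommends, assuming rsync and an external drive mounted at /mnt/backup (the paths are illustrative, not from the thread):

  # mirror /home; -a preserves permissions and timestamps, --delete prunes files removed at the source
  rsync -a --delete /home/ /mnt/backup/home/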
Gary Whatmore <no spam.sp> writes:
Jeff Nowakowski Wrote: On 01/19/2011 04:18 PM, Gour wrote: That's why we wrote it would be better to use some rolling release like Archlinux where the distro cannot become so outdated that it's not possible to upgrade easily. https://wiki.archlinux.org/index.php/FAQ : "Q) Why would I not want to use Arch? A) [...] you do not have the ability/time/desire for a 'do-it-yourself' GNU/Linux distribution" This is something the Gentoo and Arch fanboys don't get. They don't have any idea how little time a typical Ubuntu user spends maintaining the system and installing updates. The best solution is to hire someone familiar with computers (e.g. a nephew bribed with chocolate). It's almost free and they will want to spend hours configuring your system. This way you spend none of your own time maintaining. Another option is to turn on all automatic updates. Everything happens in the background. It might ask for a sudo password once a week. In any case the Ubuntu user spends less than 10 minutes per month maintaining the system. It's possible but you need compatible hardware (Nvidia graphics and Wifi without a proprietary firmware, at least). You can't beat that. I also don't see how Archlinux protects you from an outdated system. It's up to you to update your system. The longer you wait, the more chance incompatibilities creep in. I personally use CentOS for anything stable. I *was* a huge Gentoo fanboy, but the compilation simply takes too much time, and something is constantly broken if you enable ~x86 packages. I've also tried Arch. All the cool kids use it, BUT it doesn't automatically handle any configuration files in /etc and even worse, if you enable the "unstable" community repositories, the packages won't stay there long in the repository - a few days! The replacement policy is nuts. One of the packages was already removed from the server before pacman (the package manager) started downloading it! Arch is a pure community based distro for hardcore enthusiasts. It's fundamentally incompatible with stability. However, the tradeoff is that if you update weekly or monthly, then you will spend more time encountering problems between upgrades. There's no silver bullet here. Yes. Although I fail to see why upgrading Ubuntu is so hard. It only takes one hour or two every 6 months or every 3 years. The daily security updates should work automatically just like in Windows. Personally, I think you should just suck it up, make a backup of your system (which you should be doing routinely anyways), and upgrade once a year. Dissing Walter has become a sad tradition here. I'm sure a long time software professional knows how to make backups and he has likely written his own backup software and RAID drivers before you were even born. The reason Waltzy feels so clumsy in the Linux world is probably the Windows XP attitude we all long time Windows users suffer from. Many powerusers are still using Windows XP, and it has a long term support plan. The support might last forever. You've updated Windows XP only three times. Probably 20 versions of Ubuntu have appeared since Windows XP was launched. Ubuntu is stuck with the "we MUST release SOMETHING at least every 3 years" just like Windows did before XP: Win 3.11 -> 95 -> 98 -> XP (all intervals exactly 3 years). Jan 19 2011

Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 21:57:46 -0500 Gary Whatmore <no spam.sp> wrote: This is something the Gentoo and Arch fanboys don't get. First of all I spent >5yrs with Gentoo before jumping to Arch and those are really two different beasts. With Arch I practically have zero admin time after I did my 1st install. They don't have any idea how little time a typical Ubuntu user spends maintaining the system and installing updates. Moreover, I spent enough time servicing Ubuntu for new Linux users (refugees from Windows), and upgrading (*)Ubuntu from e.g. 8.10 to 10.10 was never easy and smooth, while with Arch there is no such thing as 'no packages for my version'.
Another option is to turn on all automatic updates. Everything happens in the background. It might ask for a sudo password once a week. What if an automatic update breaks something, which happens? With Arch and without automatic updates I can always wait a few days to be sure that new stuff (e.g. kernel) does not bring some undesired regressions. I personally use CentOS for anything stable. I *was* a huge Gentoo fanboy, but the compilation simply takes too much time, and something is constantly broken if you enable ~x86 packages. /me nods having experience with ~amd64 I've also tried Arch. All the cool kids use it, BUT it doesn't automatically handle any configuration files in /etc and even worse, You can see what new config files are there (*.pacnew), and a simple merge with e.g. meld/ediff is something I'd always prefer over having my conf files automatically overwritten. ;) if you enable the "unstable" community repositories, the packages won't stay there long in the repository - a few days! The replacement policy is nuts. One of the packages was already removed from the server before pacman (the package manager) started downloading it! Arch is a pure community based distro for hardcore enthusiasts. It's fundamentally incompatible with stability. You got what you asked for. :-) What you say does not make sense: you speak about Ubuntu's stability and compare it with using 'unstable' packages in Arch, which means you're comparing apples with oranges... Unstable packages (now 'testing') are for devs & geeks, but normal users can have a very decent system by using core/extra/community packages only, without much hassle. Sincerely, Gour (satisfied with Arch, just offering friendly advice and not caring much what OS people are using as long as it's Linux)
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 19 2011

Gour <gour atmarama.net> writes:
On Wed, 19 Jan 2011 20:28:43 -0500 Jeff Nowakowski <jeff dilacero.org> wrote: "Q) Why would I not want to use Arch? A) [...] you do not have the ability/time/desire for a 'do-it-yourself' GNU/Linux distribution" I have a feeling that you just copied the above from the FAQ and never actually tried Archlinux. The "do-it-yourself" from the above means that in Arch the user is not forced to use a specific DE, WM etc., and can choose whether he prefers WiCD over NM etc. On the Ubuntu side, there are, afaik, at least 3 distros achieving the same thing (Ubuntu, KUbuntu, XUbuntu) with less flexibility. :-D I also don't see how Archlinux protects you from an outdated system. It's up to you to update your system. The longer you wait, the more chance incompatibilities creep in. That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10, which means that whenever you update your system the package manager will simply pull all the packages which are required for the desired kernel, gcc version etc. I service my father-in-law's machine and he is practically illiterate when it comes to computers, and often I do not update his system for months, knowing well he does not require bleeding edge stuff, so when there is time for the update it is simple: pacman -Syu with some more packages in the queue than on my machine. ;) Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 19 2011
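A minimal sketch of the rolling-release routine Gour describes, run as root (the package name is just an example):

  # sync the package databases and upgrade every installed package in one step
  pacman -Syu
  # install a single package, pulling in whatever it currently requires
  pacman -S meld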
Jeff Nowakowski <jeff dilacero.org> writes:
On 01/20/2011 12:24 AM, Gour wrote: I have a feeling that you just copied the above from the FAQ and never actually tried Archlinux. No, I haven't tried it. I'm not going to try every OS that comes down the pike. If the FAQ says that you're going to have to be more of an expert with your system, then I believe it. If it's wrong, then maybe you can push them to update it. The "do-it-yourself" from the above means that in Arch the user is not forced to use a specific DE, WM etc., and can choose whether he prefers WiCD over NM etc. So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. That's a heavy investment of time, especially for somebody unfamiliar with Linux. That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10, which means that whenever you update your system the package manager will simply pull all the packages which are required for the desired kernel, gcc version etc. The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be. Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems? Jan 20 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday 20 January 2011 03:39:08 Jeff Nowakowski wrote: On 01/20/2011 12:24 AM, Gour wrote: I have a feeling that you just copied the above from the FAQ and never actually tried Archlinux. No, I haven't tried it. I'm not going to try every OS that comes down the pike. If the FAQ says that you're going to have to be more of an expert with your system, then I believe it. If it's wrong, then maybe you can push them to update it. The "do-it-yourself" from the above means that in Arch the user is not forced to use a specific DE, WM etc., and can choose whether he prefers WiCD over NM etc. So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. That's a heavy investment of time, especially for somebody unfamiliar with Linux. That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10, which means that whenever you update your system the package manager will simply pull all the packages which are required for the desired kernel, gcc version etc. The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be. Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems? There is no question that Arch takes more to manage than a number of other distros.
Things generally just work in Arch, whereas you often have to figure out how to fix problems when updating on Gentoo. I wouldn't suggest Arch to a beginner, but I'd be _far_ more likely to suggest it to someone than Gentoo. Arch really doesn't take all that much to maintain, but it does have a higher setup cost than your average distro, and you do have to do some level of manual configuration that I'd expect a more typical distro like OpenSuSE or Ubuntu to take care of for you. So, I'd say that your view of Arch is likely a bit skewed, because you haven't actually used it, but it still definitely isn't a distro where you just stick in the install disk, install it, and then go on your merry way either. - Jonathan M Davis Jan 20 2011

Gour <gour atmarama.net> writes:
On Thu, 20 Jan 2011 06:39:08 -0500 Jeff Nowakowski <jeff dilacero.org> wrote: No, I haven't tried it. I'm not going to try every OS that comes down the pike. Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu... So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. Right. That's why there is no need for a separate distro based on which DE the user wants to have, iow, by a simple: pacman -Sy xfce4 you get the XFCE environment installed...same with GNOME & KDE. That's a heavy investment of time, especially for somebody unfamiliar with Linux. Again, you're speaking without personal experience... Moreover, in TDPL's foreword, Walter speaks about himself as "..of an engineer..", so I'm sure he is capable of handling The Arch Way (see section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux) which says: "The Arch Way is a philosophy aimed at keeping it simple. The Arch Linux base system is quite simply the minimal, yet functional GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of optional, extra command line utilities like links and Vi. This clean and simple starting point provides the foundation for expanding the system into whatever the user requires." and from there install one of the major DEs (GNOME, KDE or XFCE) to name a few. The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be. There are no incompatibilities...if I upgrade the kernel, it means that the package manager will figure out what components have to be updated... Remember: there are no packages 'tagged' for any specific release! Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems? I did upgrade on my father-in-law's machine which was more than 1yr old without any problem. You think there must be some magic to handle it...ask some FreeBSD user how they do it. ;) Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 20 2011

Jeff Nowakowski <jeff dilacero.org> writes:
On 01/20/2011 07:33 AM, Gour wrote: On Thu, 20 Jan 2011 06:39:08 -0500 Jeff Nowakowski <jeff dilacero.org> wrote: No, I haven't tried it. I'm not going to try every OS that comes down the pike. Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu... Please yourself.
I quoted from the FAQ from the distribution's main site. If that's wrong, then Arch has a big public relations problem. I can make rational arguments without having used a system. That's a heavy investment of time, especially for somebody unfamiliar with Linux. Again, you're speaking without personal experience... From Jonathan M Davis in this thread: "There is no question that Arch takes more to manage than a number of other distros. [..] Arch really doesn't take all that much to maintain, but it does have a higher setup cost than your average distro, and you do have to do some level of manual configuration that I'd expect a more typical distro like OpenSuSE or Ubuntu to take care of for you." Moreover, in TDPL's foreword, Walter speaks about himself as "..of an engineer..", so I'm sure he is capable of handling The Arch Way You're talking about somebody who is running a nearly 3 year old version of Ubuntu because he had one bad upgrade experience, and is probably running software full of security holes. If he can't spend a day a year to upgrade his OS, what makes you think he wants to spend time on a more demanding distro? There are no incompatibilities...if I upgrade the kernel, it means that the package manager will figure out what components have to be updated... And what happens when the kernel, as it often does, changes the way it handles things like devices, and expects the administrator to do some tweaking to handle the upgrade? What happens when you upgrade X and it no longer supports your video chipset? What happens when you upgrade something as basic as the DNS library, and it reacts badly with your router? Is Arch going to maintain your config files for you? Is it going to handle jumping 2 or 3 versions for software that can only upgrade from one version ago? These are real world examples. Arch is not some magic distribution that will make upgrade problems go away. Remember: there are no packages 'tagged' for any specific release! Yeah, I know. I also run Debian Testing, which is a "rolling release". I'm not some Ubuntu noob. Jan 20 2011

Gour <gour atmarama.net> writes:
On Thu, 20 Jan 2011 09:19:54 -0500 Jeff Nowakowski <jeff dilacero.org> wrote: Please yourself. I quoted from the FAQ from the distribution's main site. If that's wrong, then Arch has a big public relations problem. Arch simply does not offer false promises that the system will "just work". Still, I see the number of users has rapidly increased in the last year or so...mostly Ubuntu 'refugees'. You're talking about somebody who is running a nearly 3 year old version of Ubuntu because he had one bad upgrade experience, and is probably running software full of security holes. If he can't spend a day a year to upgrade his OS, what makes you think he wants to spend time on a more demanding distro? My point is that due to their rolling-release nature, distros like Archlinux require less work in the case when one 'forgets' to update the OS and has to do a 'major upgrade'. That was my experience with both SuSE and Ubuntu. And what happens when the kernel, as it often does, changes the way it handles things like devices, and expects the administrator to do some tweaking to handle the upgrade? What happens when you upgrade X and it no longer supports your video chipset? What happens when you upgrade something as basic as the DNS library, and it reacts badly with your router? In the above cases, there is no distro which can save you from some admin work...and the problem is that people expect a system where, often, the only admin work is re-install.
:-) These are real world examples. Arch is not some magic distribution that will make upgrade problems go away. Sure. But an upgrade in a rolling-release distro is simpler than in an Ubuntu-like one. Yeah, I know. I also run Debian Testing, which is a "rolling release". I'm not some Ubuntu noob. Heh, I could imagine you like 'bleeding edge' considering you lived with ~x86 and 'unstable' repos. ;) Now we may close this thread...at least, I do not have anything more to say. :-D Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 20 2011

retard <re tard.com.invalid> writes:
Thu, 20 Jan 2011 13:33:58 +0100, Gour wrote: On Thu, 20 Jan 2011 06:39:08 -0500 Jeff Nowakowski <jeff dilacero.org> wrote: No, I haven't tried it. I'm not going to try every OS that comes down the pike. Then please, without any offense, do not give advice about something which you did not try. I did use Ubuntu... So instead of giving you a bunch of sane defaults, you have to make a bunch of choices up front. Right. That's why there is no need for a separate distro based on which DE the user wants to have, iow, by a simple: pacman -Sy xfce4 you get the XFCE environment installed...same with GNOME & KDE. It's the same in Ubuntu. You can install the minimal server build and install the DE of your choice in a similar way. The prebuilt images (Ubuntu, Kubuntu, Xubuntu, Lubuntu, ...) are for those who can't decide and don't want to fire up a terminal for writing down bash code. In Ubuntu you have even more choice. The huge metapackage or just the DE packages, with or without recommendations. A similar system just doesn't exist for Arch. For the lazy user Ubuntu is a dream come true - you never need to launch xterm if you don't want to. There's a GUI for almost everything. That's a heavy investment of time, especially for somebody unfamiliar with Linux. Again, you're speaking without personal experience... You're apparently a Linux fan, but have you got any idea which BSD or Solaris distro to choose? The choice isn't as simple if you have zero experience with the system. Moreover, in TDPL's foreword, Walter speaks about himself as "..of an engineer..", so I'm sure he is capable of handling The Arch Way (see section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux) which says: "The Arch Way is a philosophy aimed at keeping it simple. I think Walter's system isn't up to date because he is a lazy bitch. He has all the required competence but never bothers to update if it just works (tm). The same philosophy can be found in dmd/dmc. The code is sometimes hard to read and hard to maintain and buggy, but if it works, why fix it? The Arch Linux base system is quite simply the minimal, yet functional GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of optional, extra command line utilities like links and Vi. This clean and simple starting point provides the foundation for expanding the system into whatever the user requires." and from there install one of the major DEs (GNOME, KDE or XFCE) to name a few. I'd give my vote for LFS. It's quite minimal. The upgrade problems are still there. *Every package* you upgrade has a chance to be incompatible with the previous version. The longer you wait, the more incompatibilities there will be. There are no incompatibilities...if I upgrade the kernel, it means that the package manager will figure out what components have to be updated... Remember: there are no packages 'tagged' for any specific release!
Even if the package manager works perfectly, the repositories have bugs in their dependencies and other metadata. Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems? I did upgrade on my father-in-law's machine which was more than 1yr old without any problem. You think there must be some magic to handle it...ask some FreeBSD user how they do it. ;) There's usually a safe upgrade period. If you wait too long, package conflicts will appear. It's simply too much work to keep rules for all possible package transitions. For example, a libc update breaks kde, but it's now called kde4. The system needs to know how to first remove all kde4 packages and update them. Chromium was previously a game, but now it's a browser; the game becomes chromium-bsu or something. I have a hard time believing the minimal Arch does all this. Jan 20 2011

Andrew Wiley <debio264 gmail.com> writes:
On Thu, Jan 20, 2011 at 5:39 AM, Jeff Nowakowski <jeff dilacero.org> wrote: On 01/20/2011 12:24 AM, Gour wrote: Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) Highlighting the problem of waiting too long to upgrade. You're skipping an entire release. I'd like to see you take a snapshot of Arch from 2008, use the system for 2 years without updating, and then upgrade to the latest packages. Do you think Arch is going to magically have no problems? Ironically, I did this a few years back with an Arch box that was set up, then banished to the TV room as a gaming system, then reconnected to the internet about two years later (I didn't have wifi at the time, and I still haven't put a wifi dongle on the box). It updated with no problems and is still operating happily. Now, I was expecting problems, but on the other hand, since *all* packages are in the rolling release model and individual packages contain specific version dependencies, problems are harder to find than you'd think. Jan 20 2011

Walter Bright <newshound2 digitalmars.com> writes:
Gour wrote: Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major undertaking (I'm familiar with it since '99 when I used SuSE and had experience with deps hell.) I finally did do it, but as a clean install. I found an old 160G drive, wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.) I attached the old drive through a usb port, and copied everything on it into a subdirectory of the new drive. Then, file and directory by file and directory, I moved the files into place on my new home directory. The main difficulty was the . files, which litter the home directory and gawd knows what they do or are for. This is one reason why I tend to stick with all defaults. The only real problem I've run into (so far) is the sunbird calendar has been unceremoniously dumped from Ubuntu. The data file for it is in some crappy binary format, so poof, there goes all my calendar data. Why do I bother with this crap. I think I'll stick with the ipod calendar. Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how gcc's strtof() works.
Erratic floating point is typical of C runtime library implementations (the transcendentals are often sloppily done), which is why more and more Phobos uses its own implementations that Don has put together. Jan 21 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/22/11 12:35 AM, Walter Bright wrote: Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how gcc's strtof() works. Erratic floating point is typical of C runtime library implementations (the transcendentals are often sloppily done), which is why more and more Phobos uses its own implementations that Don has put together. I think we must change to our own routines anyway. One strategic advantage of native implementations of strtof (and the converse sprintf etc.) is that we can CTFE them, which opens the door to interesting applications. I have something CTFEable starting from your dmc code, but never got around to handling all of the small details. Andrei Jan 21 2011

Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote: On 1/22/11 12:35 AM, Walter Bright wrote: Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how gcc's strtof() works. Erratic floating point is typical of C runtime library implementations (the transcendentals are often sloppily done), which is why more and more Phobos uses its own implementations that Don has put together. I think we must change to our own routines anyway. One strategic advantage of native implementations of strtof (and the converse sprintf etc.) is that we can CTFE them, which opens the door to interesting applications. We can also make our own conversion routines consistent, pure, thread safe and locale-independent. Jan 22 2011

Gour <gour atmarama.net> writes:
On Fri, 21 Jan 2011 22:35:55 -0800 Walter Bright <newshound2 digitalmars.com> wrote: Hello Walter, I finally did do it, but as a clean install. I found an old 160G drive, wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.) in the last few days I did a little research about 'easy-to-admin OS-es' and the result of it is: PC-BSD (http://www.pcbsd.org/), an Ubuntu-like PC-BSD with a GUI installer. The possible advantage is that here OS means kernel+tools which are strictly separated from the other 'add-on' packages, which should guarantee smooth upgrades. Moreover, PC-BSD deploys a so-called PBI installer which installs every 'add-on' package with a complete set of required libs, preventing upgrade breakages. Of course, some more HD space is wasted, but this will be resolved in the June/July 9.0 release where such add-on packages will use a kind of spool of common libs, but the main OS is still kept intact. I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. Finally, there is the latest dmd2 available in 'ports', and having you on PC-BSD will make it even better. ;) Sincerely, Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA ---------------------------------------------------------------- Jan 21 2011

Walter Bright <newshound2 digitalmars.com> writes:
Gour wrote: I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty well until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository. Jan 22 2011
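A minimal sketch of the list Walter describes keeping, written as a re-runnable script (the package names are illustrative, not his actual list):

  #!/bin/sh
  # replay after a clean install to get the same apps back
  sudo apt-get update
  sudo apt-get install -y meld git vlc rhythmbox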
spir <denis.spir gmail.com> writes:
On 01/22/2011 09:58 AM, Walter Bright wrote: Gour wrote: I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. Same in my experience. I had to recently re-install my ubuntu box from scratch (which is why I have the same amusing info as Walter telling me my machine runs ubuntu 11.04) because the 10.04 --> 10.10 upgrade miserably crashed (at the end of the procedure, indeed). And no, this is not due to me naughtily messing with the system; while userland is highly personalised I do not touch the rest (mainly because my brain cannot cope with the standard unix filesystem hierarchy). (I use linux only for philosophical reasons, else I would happily switch to mac.) Denis _________________ vita es estrany spir.wikidot.com Jan 22 2011

Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes:
On 01/22/11 03:57, spir wrote: On 01/22/2011 09:58 AM, Walter Bright wrote: Gour wrote: I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. Same in my experience. I had to recently re-install my ubuntu box from scratch (which is why I have the same amusing info as Walter telling me my machine runs ubuntu 11.04) because the 10.04 --> 10.10 upgrade miserably crashed (at the end of the procedure, indeed). And no, this is not due to me naughtily messing with the system; while userland is highly personalised I do not touch the rest (mainly because my brain cannot cope with the standard unix filesystem hierarchy). (I use linux only for philosophical reasons, else I would happily switch to mac.) Denis _________________ vita es estrany spir.wikidot.com Likewise I had occasional issues with Ubuntu/Kubuntu upgrades when I was using it. Moving to a "rolling release" style distribution (Gentoo) changed everything for me. I haven't had a single major issue since. (I put "major" in there because there have been issues, but of the "glance at the screen, notice the blocker, type out the one very short command that will fix it, continue updating" variety.) Heck, updating has proven so straight-forward that I check for updates almost daily. I originally went to Linux for "philosophical" reasons, as well, but now that I've had a taste of a "real distro" I really don't have any interest in toying around with anything else. I do have a Windows install for development/testing purposes though... running in a VM. ;) Amazingly enough, Windows seems to be perfectly happy running as a guest O/S.
If it was possible to do the same with OS X, I would. (Anyone know a little trick for that, using VirtualBox?)
--
Chris N-S Jan 22 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/22/11, Christopher Nicholson-Sauls <ibisbasenji gmail.com> wrote: If it was possible to do the same with OS X, I would. (Anyone know a little trick for that, using VirtualBox?) No, that is illegal! But you might want to do a google search for *cough* iDeneb *cough* and download vmware player. :p Jan 22 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 22.01.2011 17:36, Andrej Mitrovic wrote: On 1/22/11, Christopher Nicholson-Sauls <ibisbasenji gmail.com> wrote: If it was possible to do the same with OS X, I would. (Anyone know a little trick for that, using VirtualBox?) No, that is illegal! But you might want to do a google search for *cough* iDeneb *cough* and download vmware player. :p A google search for virtualbox osx takwing may be interesting as well. Jan 22 2011

retard <re tard.com.invalid> writes:
Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote: Gour wrote: I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty well until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository. Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird "It was developed as a standalone version of the Lightning calendar and scheduling extension for Mozilla Thunderbird. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning.[6][7]" Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead. Jan 22 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 22.01.2011 13:21, retard wrote: Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote: Gour wrote: I'm very seriously considering putting PC-BSD on my desktop and on several others in order to reduce the admin time required to maintain all those machines. OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows upgrades never ever work in place (at least not for me). You have to wipe the disk, install from scratch, then reinstall all your apps and reconfigure them. You're hosed if you lose an install disk or the serial # for it. Ubuntu isn't much better, but at least you don't have to worry about install disks and serial numbers. I just keep a list of sudo apt-get commands! That works pretty well until the Ubuntu gods just decide to drop kick your apps (like sunbird) out of the repository. Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird "It was developed as a standalone version of the Lightning calendar and scheduling extension for Mozilla Thunderbird. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning.[6][7]" Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead. Ubuntu doesn't include Lightning, either.
Walter: You could add the lightning plugin to your thunderbird from the mozilla page: http://www.mozilla.org/projects/calendar/lightning/index.html Hopefully it automatically imports your sunbird data or is at least able to import it manually. Jan 22 2011

Walter Bright <newshound2 digitalmars.com> writes:
retard wrote: Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead. I'm really not interested in Google owning my private data. Jan 22 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/22/11 3:03 PM, Walter Bright wrote: retard wrote: Ubuntu doesn't drop support for widely used software. I'd use Google's Calendar instead. I'm really not interested in Google owning my private data. Google takes email privacy very seriously. Only last week they fired an employee for snooping through someone else's email. http://techcrunch.com/2010/09/14/google-engineer-spying-fired/ Of course, that could be framed either as a success or a failure of Google's privacy enforcement. Several companies are using gmail for their email infrastructure. Andrei Jan 22 2011

Walter Bright <newshound2 digitalmars.com> writes:
Andrei Alexandrescu wrote: Google takes email privacy very seriously. Only last week they fired an employee for snooping through someone else's email. http://techcrunch.com/2010/09/14/google-engineer-spying-fired/ That's good to know. On the other hand, Google keeps information forever. Ownership, management, policies, and practices change. And to be frank, the fact that some of Google's employees are not authorized to look at emails means that others are. And those others are subject to the usual human weaknesses of bribery, blackmail, temptation, voyeurism, etc. Heck, the White House is famous for being a leaky organization, despite extensive security. I rent storage on Amazon's servers, but the stuff I send there is encrypted before Amazon ever sees it. I don't have to depend at all on Amazon having a privacy policy or airtight security. Google could implement their Calendar, etc., stuff the same way. I'd even pay for it (like I pay Amazon). Jan 22 2011
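A minimal sketch of the encrypt-before-upload scheme Walter describes, using gpg's symmetric mode (the file names are illustrative; nothing here is his actual setup):

  # encrypt locally; only the ciphertext ever reaches the storage provider
  gpg --symmetric --cipher-algo AES256 --output calendar.tar.gpg calendar.tar
  # after downloading it back:
  gpg --decrypt --output calendar.tar calendar.tar.gpg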
I hope it'll help you: http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/

Yes, lightning seems to have been the successor mozilla project to sunbird (wikipedia would probably tell you more).

Denis
_________________
vita es estrany
spir.wikidot.com

Jan 22 2011

Walter Bright <newshound2 digitalmars.com> writes:

Vladimir Panteleev wrote:
http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/

Thanks for finding that. But I think I'll stick for now with the ipod's calendar. It's more useful anyway, as it moves with me.

Jan 22 2011

retard <re tard.com.invalid> writes:

Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:
Vladimir Panteleev wrote:
http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
Thanks for finding that. But I think I'll stick for now with the ipod's calendar. It's more useful anyway, as it moves with me.

Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.

Jan 22 2011

Daniel Gibson <metalcaedes gmail.com> writes:

Am 22.01.2011 22:31, schrieb retard:
Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:
Vladimir Panteleev wrote:
http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
Thanks for finding that. But I think I'll stick for now with the ipod's calendar. It's more useful anyway, as it moves with me.
Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.

And is the support for the graphics chip better, i.e. can you use full resolution?

Jan 22 2011

Walter Bright <newshound2 digitalmars.com> writes:

Daniel Gibson wrote:
And is the support for the graphics chip better, i.e. can you use full resolution?

Yes, it recognized my resolution automatically. That's a nice improvement.

Jan 22 2011

Walter Bright <newshound2 digitalmars.com> writes:

retard wrote:
Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.

I haven't tried the sound yet, but the video playback definitely is better. Though the whole screen flashes now and then, like the video mode is being reset badly. This is new behavior.

Jan 22 2011

retard <re tard.com.invalid> writes:

Sat, 22 Jan 2011 14:47:48 -0800, Walter Bright wrote:
retard wrote:
Does the new Ubuntu overall work better than the old one? Would be amazing if the media players are still all broken.
I haven't tried the sound yet, but the video playback definitely is better. Though the whole screen flashes now and then, like the video mode is being reset badly. This is new behavior.

Ubuntu probably uses Compiz if you have enabled desktop effects. This might not work with ati's (open source) drivers. Turning Compiz off makes it use a "safer" 2d engine. In Gnome the setting can be changed here: http://www.howtoforge.com/enabling-compiz-fusion-on-an-ubuntu-10.10-desktop-nvidia-geforce-8200-p2 It's the "none" option in the second figure.

Jan 22 2011

spir <denis.spir gmail.com> writes:

On 01/22/2011 07:35 AM, Walter Bright wrote:
I finally did do it, but as a clean install. I found an old 160G drive, wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.)
Same for me ;-)

_________________
vita es estrany
spir.wikidot.com

Jan 22 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:

On Saturday 08 January 2011 14:34:19 Walter Bright wrote:
Michel Fortin wrote:
I know you had your reasons, but perhaps it's time for you to upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (April 2008). <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.

A while back I took to putting /home on a separate partition from the root directory, and I never upgrade in place. I replace the whole thing every time. Maybe it's because I've never trusted Windows to do it correctly, but I've never thought that it was a good idea to upgrade in place. I never do it on any OS. And by having /home on its own partition, it doesn't affect my data. Sometimes, config files can be an issue, but worst case, that's fixed by blowing them away. Of course, I use neither Ubuntu nor Gnome, so I don't know what the exact caveats are with those. And at the moment, I'm primarily using Arch, which has rolling releases, so unless I screw up my machine, I pretty much don't have to worry about updating the OS to a new release. The pieces get updated as you go, and it works just fine (unlike Gentoo, where you can be screwed on updates because a particular package didn't build). Of course, I'd have gone nuts having an installation as old as yours appears to be, so we're obviously of very different mindsets when dealing with upgrades. Still, I'd advise making /home its own partition and then doing clean installs of the OS whenever you upgrade.

- Jonathan M Davis

Jan 08 2011

Walter Bright <newshound2 digitalmars.com> writes:

Jonathan M Davis wrote:
Of course, I'd have gone nuts having an installation as old as yours appears to be,

I think it's less than a year old.

Jan 08 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:

On Saturday 08 January 2011 20:16:05 Walter Bright wrote:
Jonathan M Davis wrote:
Of course, I'd have gone nuts having an installation as old as yours appears to be,
I think it's less than a year old.

Hmm. I thought that someone said that the version you were running was from 2008. But if it's less than a year old, that generally isn't a big deal unless there's a particular package that you really want updated, and there are usually ways to deal with one package. I do quite like the rolling release model though.

- Jonathan M Davis

Jan 08 2011

Russel Winder <russel russel.org.uk> writes:

On Sat, 2011-01-08 at 18:22 -0800, Jonathan M Davis wrote:
On Saturday 08 January 2011 14:34:19 Walter Bright wrote:
Michel Fortin wrote:
I know you had your reasons, but perhaps it's time for you to upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (April 2008). <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc.

Personally I have never had an in-place Ubuntu upgrade f*** up any of my machines -- server, workstation, laptops. However, I really feel your pain about video and audio tools on Ubuntu, these have regularly been screwed over by an upgrade. There are also other niggles: my current beef is that the 10.10 upgrade stopped my Lenovo T500 from going to sleep when closing the lid.

On my laptops I have two system partitions so as to dual boot between Debian Testing and the latest released Ubuntu. This way I find I always have a reasonably up to date system that works as I want it. Currently I am having a Debian Testing period pending 11.04 being released.

I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.
A while back I took to putting /home on a separate partition from the root directory, and I never upgrade in place. I replace the whole thing every time. Maybe it's because I've never trusted Windows to do it correctly, but I've never thought that it was a good idea to upgrade in place. I never do it on any OS. And by having /home on its own partition, it doesn't affect my data. Sometimes, config files can be an issue, but worst case, that's fixed by blowing them away. Of course, I use neither Ubuntu nor Gnome, so I don't know what the exact caveats are with those. And at the moment, I'm primarily using Arch, which has rolling releases, so unless I screw up my machine, I pretty much don't have to worry about updating the OS to a new release. The pieces get updated as you go, and it works just fine (unlike Gentoo, where you can be screwed on updates because a particular package didn't build).

I always have /home as a separate partition as I dual boot between Debian and Ubuntu from two distinct / partitions. But I always upgrade in place -- but having the dual boot makes for trivially easy recovery from problems. Debian Testing is really a rolling release but it tends to be behind Ubuntu in some versions of things and ahead in others. Also Ubuntu has non-free stuff that is forbidden on Debian. Not to mention the F$$F$$ fiasco!

Of course, I'd have gone nuts having an installation as old as yours appears to be, so we're obviously of very different mindsets when dealing with upgrades. Still, I'd advise making /home its own partition and then doing clean installs of the OS whenever you upgrade.

I have to agree about being two years behind, this is too far to be comfortable. I would definitely recommend an upgrade to Walter's machines.

-- Russel.

Jan 09 2011

Daniel Gibson <metalcaedes gmail.com> writes:

Am 09.01.2011 12:16, schrieb Russel Winder:
Debian Testing is really a rolling release but it tends to be behind Ubuntu in some versions of things and ahead in others.
Also Ubuntu has non-free stuff that is forbidden on Debian. Not to mention the F$$F$$ fiasco!

That's why debian has contrib and non-free repos. Ok, lame and libdvdcss are missing, but can be easily obtained from debian-multimedia.org (the latter is missing in ubuntu as well). What's F$$F$$?

Cheers,
- Daniel

Jan 09 2011

"Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Sun, 09 Jan 2011 21:02:13 +0200, Daniel Gibson <metalcaedes gmail.com> wrote:
What's F$$F$$?

FireFox/IceWeasel: http://en.wikipedia.org/wiki/Mozilla_Corporation_software_rebranded_by_the_Debian_project

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 09 2011

Daniel Gibson <metalcaedes gmail.com> writes:

Am 09.01.2011 22:16, schrieb Vladimir Panteleev:
On Sun, 09 Jan 2011 21:02:13 +0200, Daniel Gibson <metalcaedes gmail.com> wrote:
What's F$$F$$?
FireFox/IceWeasel: http://en.wikipedia.org/wiki/Mozilla_Corporation_software_rebranded_by_the_Debian_project

Oh that. I couldn't care less if my browser is called Firefox or Iceweasel. Firefox plugins/extensions work with Iceweasel without any problems and I'm not aware of any issues caused by that rebranding. Also you're free to not install Iceweasel and install the Firefox binaries from mozilla.com. (The same is true for Thunderbird/Icedove)

Cheers,
- Daniel

Jan 09 2011

retard <re tard.com.invalid> writes:

Sat, 08 Jan 2011 14:34:19 -0800, Walter Bright wrote:
Michel Fortin wrote:
I know you had your reasons, but perhaps it's time for you to upgrade to a more recent version of Ubuntu? That version is what comes with Hardy Heron (April 2008). <https://launchpad.net/ubuntu/+source/meld>
I know. The last time I upgraded Ubuntu in place it f****d up my system so bad I had to wipe the disk and start all over. It still won't play videos correctly (the previous Ubuntu worked fine), the rhythmbox music player never worked again, it wiped out all my virtual boxes, I had to spend hours googling around trying to figure out how to reconfigure the display driver so the monitor worked again, etc. I learned my lesson! Yes, I'll eventually upgrade, but I'm not looking forward to it.

Ubuntu has a menu entry for "restricted drivers". It provides support for both ATI/AMD (Radeon 8500 or better, appeared in 1998 or 1999!) and NVIDIA cards (Geforce 256 or better, appeared in 1999!) and I think it automatically suggests (a pop-up window) correct drivers in the latest releases right after the first install. Intel chips are automatically supported by the open source drivers. VIA and S3 may or may not work out of the box. I'm just a bit curious to know what GPU you have? If it's some ancient VLB (vesa local bus) or ISA card, I can donate $15 for buying one that uses AGP or PCI Express.

Ubuntu doesn't support all video formats out of the box, but the media players and browsers automatically suggest loading missing drivers. At least in the 3 or 4 latest releases. Maybe the problem isn't the encoder, it might be the Linux incompatible web site.

Or you could download the latest version from meld's website and compile it yourself.
Yeah, I could spend an afternoon doing that.

Another one of these jokes? Probably one of the best compiler authors in the whole world uses a whole afternoon doing something (compiling a program) that total Linux noobs do in less than 30 minutes with the help of Google search.

Jan 10 2011

Walter Bright <newshound2 digitalmars.com> writes:

retard wrote:
Ubuntu has a menu entry for "restricted drivers". It provides support for both ATI/AMD (Radeon 8500 or better, appeared in 1998 or 1999!)
and NVIDIA cards (Geforce 256 or better, appeared in 1999!) and I think it automatically suggests (a pop-up window) correct drivers in the latest releases right after the first install. Intel chips are automatically supported by the open source drivers. VIA and S3 may or may not work out of the box. I'm just a bit curious to know what GPU you have? If it's some ancient VLB (vesa local bus) or ISA card, I can donate $15 for buying one that uses AGP or PCI Express. Ubuntu doesn't support all video formats out of the box, but the media players and browsers automatically suggest loading missing drivers. At least in the 3 or 4 latest releases. Maybe the problem isn't the encoder, it might be the Linux incompatible web site. My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into it. It's hardly weird or wacky or old (it was new at the time I bought it to install Ubuntu). My display is 1920 x 1200. That just seems to cause grief for Ubuntu. Windows has no issues at all with it. Or you could download the latest version from meld's website and compile it yourself. Yeah, I could spend an afternoon doing that. Another one of these jokes? Probably one of the best compiler authors in the whole world uses a whole afternoon doing something (compiling a program) On the other hand, I regularly get emails from people with 10 years of coding experience who are flummoxed by a "symbol not defined" message from the linker. :-) that total Linux noobs do in less than 30 minutes with the help of Google search. Yeah, I've spent a lot of time googling for solutions to problems with Linux. You know what? I get pages of results from support forums - every solution is different and comes with statements like "seems to work", "doesn't work for me", etc. The advice is clearly from people who do not know what they are doing, and randomly stab at things, and these are the first page of google results. Jan 11 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/11/11, Walter Bright <newshound2 digitalmars.com> wrote: Yeah, I've spent a lot of time googling for solutions to problems with Linux. You know what? I get pages of results from support forums - every solution is different and comes with statements like "seems to work", "doesn't work for me", etc. The advice is clearly from people who do not know what they are doing, and randomly stab at things, and these are the first page of google results. That's my biggest problem with Linux. Having technical problems is not the issue, finding the right solution in the sea of forum posts is the problem. When I have a problem with something breaking down on Windows, most of the time a single google search reveals the solution in one of the very first results (it's either on an MSDN page or one of the more popular forums). This probably has to do with the fact that regular users have either XP or Vista/7 installed. So there's really not much searching you have to do. Once someone posts a solution, that's the end of the story (more often than not). I remember a few years ago I got a copy of Ubuntu, and I wanted to disable antialiased fonts (they looked really bad on the screen). So I simply disabled antialised fonts in one of the display property panels, and thought that would be the end of the story. But guess what? Firefox and other applications don't want to follow the OS settings, and they will override your settings and render websites with antialised fonts. So now I had to search for half an hour to find a solution. 
I finally find a guide where the instructions are to edit the etc/fonts.conf file. So I do that. But antialised fonts were still active. So I spend another 30 minutes looking for more information. Then I run into another website where the instructions are to delete a couple of fonts from the system. OK. I run the command in the terminal, I reset the system, but then on boot x-org crashes. So now I'm left with a blinking cursor on a black background, with no knowledge whatsover of how to fix x-org or reset its settings. Instinctively I run "help" and I get back a list of 100 commands, but I can only read the last 20 and I've no idea how to scroll up to read more. So, hours wasted and a broken Linux system all because I wanted to disable antialiased fonts. But that's just one example. I have plenty more. GRUB failing to install properly, GRUB failing to detect all of my windows installations, and then there's that "wubi" which *does not* work. Of course there are numerous guides on how to fix wubi as well but those fail too. Bleh. I like open-source, Linux - the kernel might be awesome for all I know, but the distributions plain-simple *suck*. Jan 11 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrej Mitrovic wrote: That's my biggest problem with Linux. Having technical problems is not the issue, finding the right solution in the sea of forum posts is the problem. The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording. Jan 11 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 11.01.2011 22:36, schrieb Walter Bright: Andrej Mitrovic wrote: That's my biggest problem with Linux. Having technical problems is not the issue, finding the right solution in the sea of forum posts is the problem. The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording. Those results are often in big forums like ubuntuforums.org that get a lot of links etc, so even if one thread doesn't have many incoming links, it may still get a top ranking. Also my blog entries (hosted at wordpress.com) get on the google frontpage when looking for the specific topic, even though my blog is mostly unknown, has 2-20 visitors per day and almost no incoming links.. Googles algorithms often do seem like voodoo ;) Also: Many problems (and their correct solutions) heavily depend on your system. What desktop environment is used, what additional stuff (dbus, hal, ...) is used, what are the versions of this stuff (and X.org), what distribution is used, ... There may be different default configurations shipped depending on what distribution (and what version of that distribution) you use, ... So there often is no single correct answer that will work for anyone. Still, in my experience those HOWTOs often work (it may help to look at multiple HOWTOs and compare them if you're not sure, if it applies to your system) or at least push you in the right direction. 
Cheers, - Daniel Jan 11 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Google does seem to take into account whatever information it has on you, which might explain why your own blog is a top result for you. If I log out of Google and delete my preferences, searching for "D" won't find anything about the D language in the top results. But if I log in and search "D" again, the D website will be the top result. Jan 11 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Andrej Mitrovic Wrote: Google does seem to take into account whatever information it has on you, which might explain why your own blog is a top result for you. If I log out of Google and delete my preferences, searching for "D" won't find anything about the D language in the top results. But if I log in and search "D" again, the D website will be the top result. Best place to go for ranking information on your website: https://www.google.com/webmasters/tools/home?hl=en&pli=1 Need to show you own the site though. Jan 11 2011 "Nick Sabalausky" <a a.a> writes: "Daniel Gibson" <metalcaedes gmail.com> wrote in message news:igijc7$27pv$4 digitalmars.com... Am 11.01.2011 22:36, schrieb Walter Bright: Andrej Mitrovic wrote: That's my biggest problem with Linux. Having technical problems is not the issue, finding the right solution in the sea of forum posts is the problem. The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? My experience with Windows is, like yours, the opposite. The top ranked result will be correct and to the point. No weasel wording. Those results are often in big forums like ubuntuforums.org that get a lot of links etc, so even if one thread doesn't have many incoming links, it may still get a top ranking. Also my blog entries (hosted at wordpress.com) get on the google frontpage when looking for the specific topic, even though my blog is mostly unknown, has 2-20 visitors per day and almost no incoming links.. Googles algorithms often do seem like voodoo ;) Also: Many problems (and their correct solutions) heavily depend on your system. What desktop environment is used, what additional stuff (dbus, hal, ...) is used, what are the versions of this stuff (and X.org), what distribution is used, ... There may be different default configurations shipped depending on what distribution (and what version of that distribution) you use, ... So there often is no single correct answer that will work for anyone. Still, in my experience those HOWTOs often work (it may help to look at multiple HOWTOs and compare them if you're not sure, if it applies to your system) or at least push you in the right direction. That's probably one of the biggest things that's always bothered me about linux (not that there aren't plenty of other things that bother me about every other OS in existence). For something that's considered so standards-compliant/standards-friendly (compared to, say MS), it's painfully *un*standardized. Jan 11 2011 Christopher Nicholson-Sauls <ibisbasenji gmail.com> writes: On 01/11/11 15:36, Walter Bright wrote: Andrej Mitrovic wrote: That's my biggest problem with Linux. Having technical problems is not the issue, finding the right solution in the sea of forum posts is the problem. The worst ones begin with "you might try this..." or "I think this might work, but YMMV..." How do these wind up being the top ranked results by google? Who embeds links to that stuff? Nobody. 
The first "secret" of Linux tech-help is that most real help is dished out via IRC channels. One just has to visit their distro of choice's website and there will inevitably be a listing for an IRC channel or two -- often with one specifically for new users. It may sound like a lot of trouble, but getting help from the source and live is worlds above scanning forum posts hoping the people posting know more than you do. And thanks to the global scale of most FOSS communities, there's always someone around -- and it didn't cost you a dime. That said, a little more integrity in the forums that do exist would be nice. LinuxQuestions.org seems to be one of the better ones, from what I've seen of it.

-- Chris N-S

Jan 12 2011

Russel Winder <russel russel.org.uk> writes:

On Tue, 2011-01-11 at 11:53 -0800, Walter Bright wrote:
[ . . . ]
My display is 1920 x 1200. That just seems to cause grief for Ubuntu. Windows has no issues at all with it.
[ . . . ]

My 1900x1200 screen is fine with Ubuntu.

-- Russel.

Jan 11 2011

retard <re tard.com.invalid> writes:

Sat, 08 Jan 2011 12:36:39 -0800, Walter Bright wrote:
Lutger Blijdestijn wrote:
Walter Bright wrote:
Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-)
I use git for other projects, and meld doesn't work with it.
What version are you on? I'm using 1.3.2 and it supports git and mercurial (also committing from inside meld & stuff, I take it this is what you mean with supporting a vcs).
The one that comes with: sudo apt-get meld
1.1.5.1

One thing came to my mind. Unless you're using Ubuntu 8.04 LTS, your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems with graphics and video players as you said. The support for desktop 8.04 and 9.10 is also nearing its end (April this year). I'd recommend backing up your /home and installing 10.04 LTS or 10.10 instead.

Jan 10 2011

Walter Bright <newshound2 digitalmars.com> writes:

retard wrote:
One thing came to my mind. Unless you're using Ubuntu 8.04 LTS,

I'm using 8.10, and I've noticed that no more updates are coming.

your Ubuntu version isn't supported anymore. They might have already removed the package repositories for unsupported versions and that might indeed lead to problems with graphics and video players as you said.

What annoyed the heck out of me was the earlier (7.xx) version of Ubuntu *did* work.

The support for desktop 8.04 and 9.10 is also nearing its end (April this year). I'd recommend backing up your /home and installing 10.04 LTS or 10.10 instead.

Yeah, I know I'll be forced to upgrade soon. One thing that'll make it easier is I abandoned using Ubuntu for multimedia. For example, to play Pandora I now just plug my ipod into my stereo <g>. I just stopped using youtube on Ubuntu, as I got tired of the video randomly going black, freezing, etc.

Jan 11 2011
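retard's advice above -- back up /home, do a clean install of 10.04 or 10.10, then put your applications back -- is easy to make mechanical. A minimal sketch, assuming a spare disk mounted at /media/backup; the paths are placeholders, not a record of anyone's actual setup:

    # before the wipe: save the data and the list of installed packages
    rsync -a /home/ /media/backup/home/
    dpkg --get-selections > /media/backup/packages.txt

    # after the clean install: restore both
    sudo rsync -a /media/backup/home/ /home/
    sudo dpkg --set-selections < /media/backup/packages.txt
    sudo apt-get dselect-upgrade

This is the same idea as the list of sudo apt-get commands Walter mentions keeping, except that dpkg records the list for you.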
Jacob Carlborg <doob me.com> writes:

On 2011-01-06 21:12, Michel Fortin wrote:
On 2011-01-06 15:01:18 -0500, Jesse Phillips <jessekphillips+D gmail.com> said:
Walter Bright Wrote:
A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested.
I probably wasn't on the list at the time. I'm certainly interested, it'd certainly make it easier for me, as I'm using git locally to access that repo.
One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291
You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2
That's only if you're hosted on github. If you install on your own server, git comes with a web interface that looks like this (pointing to a specific diff): <http://repo.or.cz/w/LinuxKernelDevelopmentProcess.git/commitdiff/d7214dcb5be988a5c7d407f907c7e7e789872d24>
Also when I want an overview with git I just type gitk on the command line to bring a window where I can browse the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png>

Have you heard of gitx? I suggest you take a look at it: http://gitx.frim.nl/index.html. It's a Mac OS X GUI for git.

where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin.
One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/
Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool.
Looks like meld itself used git as its repository. I'd be surprised if it doesn't work with git. :-)

-- /Jacob Carlborg

Jan 08 2011

Russel Winder <russel russel.org.uk> writes:

On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote:
On 2011-01-06 21:12, Michel Fortin wrote:
[ . . . ]
Also when I want an overview with git I just type gitk on the command line to bring a window where I can browse the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png>

gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui.

-- Russel.

Jan 08 2011

Jesse Phillips <jessekphillips+D gmail.com> writes:

Russel Winder Wrote:
gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui.

Funny thing, gitk looks better on Windows. I don't care though. My friend ends up with font that is barely readable. Also there is giggle: http://live.gnome.org/giggle I like the name, but I still prefer gitk.
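The "gitmerge which can run meld as the merge tool" remark above points at git's configurable external tools: both diff and merge can be routed through meld, which git already knows about. A minimal sketch -- standard git configuration, nothing distro-specific (git 1.6.3 or newer for difftool):

    git config --global diff.tool meld
    git config --global merge.tool meld

    git difftool HEAD~1   # like "git diff", but opens each file pair in meld
    git mergetool         # steps through conflicted files during a merge

So meld itself doesn't need explicit git support for this much; git simply drives it as an external viewer.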
Jan 08 2011 Jacob Carlborg <doob me.com> writes: On 2011-01-08 16:01, Russel Winder wrote: On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote: On 2011-01-06 21:12, Michel Fortin wrote: [ . . . ] Also when I want an overview with git I just type gitk on the command line to bring a window where I can browser the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png> gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui. Doesn't the Tk widget set look hideous on all platforms. I can't understand why both Mercurial and git have chosen to use Tk for the GUI. -- /Jacob Carlborg Jan 08 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Saturday 08 January 2011 10:39:39 Jacob Carlborg wrote: On 2011-01-08 16:01, Russel Winder wrote: On Sat, 2011-01-08 at 15:38 +0100, Jacob Carlborg wrote: On 2011-01-06 21:12, Michel Fortin wrote: [ . . . ] Also when I want an overview with git I just type gitk on the command line to bring a window where I can browser the graph of forks, merges and commits and see the diff for each commit. Here's what gitk looks like: <http://michael-prokop.at/blog/img/gitk.png> gitk uses the Tk widget set which looks hideous -- at least on my Ubuntu and Debian systems. I now use gitg which appears to have the same functionality, but looks almost acceptable. There is also git-gui. Doesn't the Tk widget set look hideous on all platforms. I can't understand why both Mercurial and git have chosen to use Tk for the GUI. Probably because you don't need much installed for them to work. About all you need is X. Now, I'd still rather that they'd picked a decent-looking GUI toolkit and just required it (_most_ people are running full desktop systems with the proper requirements installed and which will install them if a package needs them and they're not installed), but they were probably trying to make it work in pretty minimal environments. - Jonathan M Davis Jan 08 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Git Extensions looks pretty sweet for use on Windows (I haven't tried it yet though): https://code.google.com/p/gitextensions/ Jan 08 2011 Walter Bright <newshound2 digitalmars.com> writes: Jesse Phillips wrote: Walter Bright Wrote: One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 You mean this: https://github.com/braddr/dmd/commit/f1fde96227394f926da5841db4f0f4c608b2e7b2 Yes, exactly. Good. One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Git does not have its own merge tool. You are free to use meld. Though there is gitmerge which can run meld as the merge tool. Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Walter Bright Wrote: Eh, that's inferior. The svn will will highlight what part of a line is different, rather than just the whole line. As others have mentioned, it really isn't the CVS, I don't think the default SVN web server does the highlighting you want, it might not even do any highlighting. Trac should be able to provide the same functionality as found on github, though github will provide a lot more then just highlighting. Looks like meld itself used git as it's repository. I'd be surprised if it doesn't work with git. :-) I use git for other projects, and meld doesn't work with it. 
Found these links: http://www.muhuk.com/2008/11/adding-git-support-to-meld/ http://nathanhoad.net/how-to-meld-for-git-diffs-in-ubuntu-hardy So maybe that is what's missing. Jan 06 2011 Jean Crystof <news news.com> writes: Walter Bright Wrote: retard wrote: One thing came to my mind. Unless you're using Ubuntu 8.04 LTS, I'm using 8.10, and I've noticed that no more updates are coming. Huh! You should seriously consider upgrading. If you are running any kind of services in the system or browsing the web, you're exposed to both remote and local attacks. I know at least one local root exploit 8.10 is vulnerable to. It's just plainly stupid to use a distro after the support has died. Are you running Windows 98 still too? If you upgrade Ubuntu, do a clean install. Upgrading 8.10 in-place goes via -> 9.04 -> 9.10 -> 10.4 -> 10.10. Each one takes 1 or 2 hours. Clean install of Ubuntu 10.10 or 11.04 (soon available) will only take less than 30 minutes. The support for desktop 8.04 and 9.10 is also nearing its end (April this year). I'd recommend backing up your /home and installing 10.04 LTS or 10.10 instead. Yeah, I know I'll be forced to upgrade soon. Soon? Your system already sounds like it's broken. One thing that'll make it easier is I abandoned using Ubuntu for multimedia. For example, to play Pandora I now just plug my ipod into my stereo <g>. I just stopped using youtube on Ubuntu, as I got tired of the video randomly going black, freezing, etc. I'm using Amarok and Spotify. Both work fine. Jan 11 2011 Jean Crystof <news news.com> writes: Walter Bright Wrote: My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into it. It's hardly weird or wacky or old (it was new at the time I bought it to install Ubuntu). ASUS M2A-VM has 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported." Fast forward to this day: http://www.phoronix.com/scan.php?page=article&item=amd_driver_q111&num=2 Benchmark page says: the only available driver for your graphics gives only about 10-20% of the real performance. Why? ATI sucks on Linux. Don't buy ATI. Buy Nvidia instead: http://geizhals.at/a466974.html This is 3rd latest Nvidia GPU generation. How long support lasts? Ubuntu 10.10 still supports all Geforce 2+ which is 10 years old. I foretell Ubuntu 19.04 is last one supporting this. Use Nvidia and your problems are gone. Jan 11 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Did you hear that, Walter? Just buy a 500$ video card so you can watch youtube videos on Linux. Easy. :D Jan 11 2011 Jean Crystof <news news.com> writes: Andrej Mitrovic Wrote: Did you hear that, Walter? Just buy a 500$video card so you can watch youtube videos on Linux. Easy. :D Dear Sir, did you even open the link? It's the cheapest Nvidia card I could find by googling for 30 seconds. 28,58 euros translates to$37. I can't promise that very old Geforce chips support 1920x1200 but at least the ones compatible with his PCI-express bus work perfectly. Maybe You were trying to be funny? Jan 11 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. 
But there's no telling if that would work either. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. Jan 11 2011 Jean Crystof <news news.com> writes: Andrej Mitrovic Wrote: Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. But there's no telling if that would work either. I can tell from our hobbyist group's experience with Compiz, native Linux games, Wine, multiplatform OpenGL game development on Linux, and hardware accelerated video that all of these tasks had problems on our ATI hardware and no problems with Nvidia. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. That's not true. I suggested a low end card because if he's using integrated graphics now, there's no need for high end hardware. The reason why the price is lower is cheaper cards have smaller heatsinks, less fans or none at all, no advanced features (SLI), low frequency cores with most shaders disabled (They've sidestepped manufacturing defects by disabling broken cores), smaller memory bandwidth, less & cheaper memory modules without heatsinks. Just look at the circuit board. A high end graphics card is physically at least twice as large or even more. No wonder it costs more. The price goes up $100 just by buying the bigger heatsinks are fans. Claiming that low end components have shorter lifespan is ridiculous. Why does Ubuntu 10.10 still support cheap Geforce 2 MX then? Jan 11 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/12/11, Jean Crystof <news news.com> wrote: Claiming that low end components have shorter lifespan is ridiculous. You've never had computer equipment fail on you? Jan 12 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrej Mitrovic wrote: On 1/12/11, Jean Crystof <news news.com> wrote: Claiming that low end components have shorter lifespan is ridiculous. You've never had computer equipment fail on you? I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least: keyboards power supplies hard drives fans monitors I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card. The keyboards fail so often I keep a couple spares around. I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes. Jan 12 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 12 2011 Walter Bright <newshound2 digitalmars.com> writes: Vladimir Panteleev wrote: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. 
Yup, the$9.99 ones. They also get things spilled on them, why ruin an expensive one? <g> Jan 12 2011 Caligo <iteronvexor gmail.com> writes: On Wed, Jan 12, 2011 at 11:33 PM, Walter Bright <newshound2 digitalmars.com>wrote: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright < newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. Yup, the $9.99 ones. They also get things spilled on them, why ruin an expensive one? <g> http://www.daskeyboard.com/ or http://steelseries.com/us/products/keyboards/steelseries-7g expensive, I know, but who cares. You only live once! Jan 13 2011 "Nick Sabalausky" <a a.a> writes: "Walter Bright" <newshound2 digitalmars.com> wrote in message news:igm2um$2omg$1 digitalmars.com... Vladimir Panteleev wrote: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. Yup, the$9.99 ones. They also get things spilled on them, why ruin an expensive one? <g> I've got a $6 one I've been using for years, and I frequently beat the shit out of it. And I mean literally just pounding on it, not to type, but just to beat :) With all the physical abuse I give this ultra-cheapie thing, I honestly can't believe it still works fine after all these years. "AOpen" gets my approval for keyboards :) (Heh, I actually had to turn it over to check the brand. I had no idea what it was.) I never spill anything on it, though. Jan 13 2011 Stanislav Blinov <blinov loniir.ru> writes: 14.01.2011 3:12, Nick Sabalausky ïèøåò: "Walter Bright"<newshound2 digitalmars.com> wrote in message news:igm2um$2omg$1 digitalmars.com... Vladimir Panteleev wrote: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. Yup, the$9.99 ones. They also get things spilled on them, why ruin an expensive one?<g> I've got a $6 one I've been using for years, and I frequently beat the shit out of it. And I mean literally just pounding on it, not to type, but just to beat :) With all the physical abuse I give this ultra-cheapie thing, I honestly can't believe it still works fine after all these years. "AOpen" gets my approval for keyboards :) (Heh, I actually had to turn it over to check the brand. I had no idea what it was.) I never spill anything on it, though. I felt very depressed when my first keyboard failed - the rubber shocks got tired and started to tear. It served me for more than 10 years in everything from gaming to writing university reports to programming (pounding, dropping and spilling/sugaring included). And it was an old one - without all those annoying win-keys and stuff. Never got another one that would last at least a year. One of the recent ones died taking with it a USB port on the mobo (or maybe it was vice-versa, I don't know). 
Jan 14 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 13.01.2011 06:33, schrieb Walter Bright: Vladimir Panteleev wrote: On Thu, 13 Jan 2011 05:43:27 +0200, Walter Bright <newshound2 digitalmars.com> wrote: The keyboards fail so often I keep a couple spares around. Let me guess, all cheap rubber-domes? Maybe you should have a look at some professional keyboards. Mechanical keyboards are quite durable, and feel much nicer to type on. Yup, the$9.99 ones. They also get things spilled on them, why ruin an expensive one? <g> There are washable keyboards, e.g. http://h30094.www3.hp.com/product/sku/5110581/mfg_partno/VF097AA Cheers, - Daniel Jan 13 2011 Walter Bright <newshound2 digitalmars.com> writes: Daniel Gibson wrote: There are washable keyboards, e.g. http://h30094.www3.hp.com/product/sku/5110581/mfg_partno/VF097AA I know. But what I do works for me. I happen to like the action on the cheapo keyboards, and the key layout. I'll also throw one in my suitcase for a trip, 'cuz I hate my laptop keyboard. And I don't care if they get lost/destroyed on the trip. Jan 13 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: Lol Walter you're like me. I keep buying cheap keyboards all the time. I'm almost becoming one of those people that collect things all the time (well.. the difference being I throw the old ones in the trash). Right now I'm sporting this dirt-cheap Genius keyboard, I've just looked up the price and it's 5$. My neighbor gave it to me for free because he got two for some reason. You would think a 5$ keyboard sucks, but it's pretty sweet actually. The keys have a nice depth, and they're real easy to hit. The downside? They've put the freakin' sleep button right above the right cursor key. Now *that's* genius, Genius.. So I had to disable sleep mode. LOL! *However*, my trusty Logitech MX518 is standing strong with over 5 years of use. Actually, I did cut the cable by accident once. But I had a spare 10$Logitech mouse which happened to have the same connector that plugs in that little PCI board, so I just swapped the cables. (yay for hardware design reuse!). Jan 13 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrej Mitrovic wrote: Lol Walter you're like me. I keep buying cheap keyboards all the time. I'm almost becoming one of those people that collect things all the time (well.. the difference being I throw the old ones in the trash). Right now I'm sporting this dirt-cheap Genius keyboard, I've just looked up the price and it's 5$. My neighbor gave it to me for free because he got two for some reason. You would think a 5$keyboard sucks, but it's pretty sweet actually. The keys have a nice depth, and they're real easy to hit. The downside? They've put the freakin' sleep button right above the right cursor key. Now *that's* genius, Genius.. So I had to disable sleep mode. LOL! My preferred keyboard layout has the \ key right above the Enter key. The problem is those ^%%^&*^*&^&*^ keyboards that have the \ key somewhere else, and the Enter key is extra large and in that spot. So guess what happens? If I want to delete foo\bar.c, I type in: del foo Enter Yikes! There goes my directory contents! I've done this too many times. I freakin hate those keyboards. I always check to make sure I'm not buying one, though they seem to be most of 'em. Jan 14 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 14.01.2011 04:46, schrieb Andrej Mitrovic: Lol Walter you're like me. I keep buying cheap keyboards all the time. 
I'm almost becoming one of those people that collect things all the time (well.. the difference being I throw the old ones in the trash). Right now I'm sporting this dirt-cheap Genius keyboard, I've just looked up the price and it's 5$. My neighbor gave it to me for free because he got two for some reason. You would think a 5$keyboard sucks, but it's pretty sweet actually. The keys have a nice depth, and they're real easy to hit. The downside? They've put the freakin' sleep button right above the right cursor key. Now *that's* genius, Genius.. So I had to disable sleep mode. LOL! Had something like that once, too. I just removed the key from the keyboard ;) Jan 14 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: I forgot to mention though, do *not* open up a MX518 unless you want to spend your day figuring out where all the tiny little pieces go. When I opened it the first time, all the pieces went flying in all directions. I've found all the pieces but putting them back together was a nightmare. Which piece goes where with which other piece and in what order.. Luckily I found a forum where someone else already took apart and assembled the same mouse, and even took pictures of it. There was really only this one final frustrating piece that I couldn't figure out which held the scroll wheel together and made that "clikclick" sound when you scroll. Jan 13 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrej Mitrovic wrote: I've found all the pieces but putting them back together was a nightmare. Which piece goes where with which other piece and in what order.. No prob. I've got some tools in the basement that will take care of that. Jan 14 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Walter Bright Wrote: The keyboards fail so often I keep a couple spares around. I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes. Wow, I have never had a keyboard fail. I'm stilling using my first keyboard from 1998. Hell, I haven't even rubbed off any of the letters. I guess the only components I've had fail on me has been hard drive and CD/DVD drive. Monitor was about to go. Jan 12 2011 spir <denis.spir gmail.com> writes: On 01/13/2011 04:43 AM, Walter Bright wrote: Andrej Mitrovic wrote: On 1/12/11, Jean Crystof <news news.com> wrote: Claiming that low end components have shorter lifespan is ridiculous. You've never had computer equipment fail on you? I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least: keyboards power supplies hard drives fans monitors I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card. The keyboards fail so often I keep a couple spares around. I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes. Same for me. Cheap hardware as well; and as standard as possible. I've never had any pure electronic failure (graphic card including)! I would just put fan & power supply before keyboard, and add mouse to the list just below keyboard. My keyboards do not break as often as yours: you must be a brutal guy ;-) An exception is for wireless keyboards and mice, which I quickly abandoned. 
Denis _________________ vita es estrany spir.wikidot.com Jan 13 2011 Sean Kelly <sean invisibleduck.org> writes: Walter Bright Wrote: I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes. I don't overclock any more after a weird experience I had overclocking an Athlon ages ago. It ran fine except that unzipping something always failed with a CRC error. Before that I expected that an overclocked CPU would either work or fail spectacularly. I'm not willing to risk data silently being corrupted in the background, particularly when even mid-range CPUs these days are more than enough for nearly everything. Jan 13 2011 "Nick Sabalausky" <a a.a> writes: "Walter Bright" <newshound2 digitalmars.com> wrote in message news:iglsge$2evs$1 digitalmars.com... Andrej Mitrovic wrote: On 1/12/11, Jean Crystof <news news.com> wrote: Claiming that low end components have shorter lifespan is ridiculous. You've never had computer equipment fail on you? I've had a lot of computer equipment. Failures I've had, ranked in order of most failures to least: keyboards power supplies hard drives fans monitors I've never had a CPU, memory, or mobo failure. Which is really kind of amazing. I did have a 3DFX board once, which failed after a couple years. Never bought another graphics card. The keyboards fail so often I keep a couple spares around. I buy cheap, bottom of the line equipment. I don't overclock them and I make sure there's plenty of airflow around the boxes. My failure list from most to least would be this: 1. power supply / printer 2. optical drive / floppies (the disks, not the drives) 3. hard drive 4. monitor / mouse / fan Never really had probems with anything else as far as I can remember. I had a few 3dfx cards back in the day and never had the slightest bit of trouble with any of them. I used to go through a ton of power supplies until I finally stopped buying the cheap ones. Printers kept giving me constant trouble, but the fairly modern HP I have now seems to work ok (although the OEM software/driver is complete and utter shit, but then OEM software usually is.) Jan 13 2011 Walter Bright <newshound2 digitalmars.com> writes: Nick Sabalausky wrote: My failure list from most to least would be this: 1. power supply / printer 2. optical drive / floppies (the disks, not the drives) 3. hard drive 4. monitor / mouse / fan Never really had probems with anything else as far as I can remember. I had a few 3dfx cards back in the day and never had the slightest bit of trouble with any of them. I used to go through a ton of power supplies until I finally stopped buying the cheap ones. Printers kept giving me constant trouble, but the fairly modern HP I have now seems to work ok (although the OEM software/driver is complete and utter shit, but then OEM software usually is.) My printer problems ended (mostly) when I finally spent the bux and got a laser printer. The (mostly) bit is because neither Windows nor Ubuntu support an HP 2300 printer. Sigh. Jan 13 2011 Robert Clipsham <robert octarineparrot.com> writes: On 14/01/11 03:53, Walter Bright wrote: My printer problems ended (mostly) when I finally spent the bux and got a laser printer. The (mostly) bit is because neither Windows nor Ubuntu support an HP 2300 printer. Sigh. Now this surprises me, printing has been the least painless thing I've ever encountered - it's the one area I'd say Linux excels. 
In OS X or Windows if I want to access my networked printer there's at least 5 clicks involved - on Linux there was a grand total of 0 - it detected my printer and installed it with no intervention from me, I just clicked print and it worked. Guess that's the problem with hardware though, it could have a few thousand good reviews and you could still manage to get something you run into endless issues with! -- Robert http://octarineparrot.com/ Jan 14 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 14.01.2011 16:48, schrieb Robert Clipsham: On 14/01/11 03:53, Walter Bright wrote: My printer problems ended (mostly) when I finally spent the bux and got a laser printer. The (mostly) bit is because neither Windows nor Ubuntu support an HP 2300 printer. Sigh. Now this surprises me, printing has been the least painless thing I've ever encountered - it's the one area I'd say Linux excels. In OS X or Windows if I want to access my networked printer there's at least 5 clicks involved - on Linux there was a grand total of 0 - it detected my printer and installed it with no intervention from me, I just clicked print and it worked. Guess that's the problem with hardware though, it could have a few thousand good reviews and you could still manage to get something you run into endless issues with! This really depends on your printer, some have good Linux support and some don't. Postscript-support (mostly seen in better Laser printers) is probably most painless (just supply a PPD - if CUPS doesn't have one for your printer anyway - and you're done). But also many newer inkjet printers have Linux support, but many need a proprietary library from the vendor to work. But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Jan 14 2011 Walter Bright <newshound2 digitalmars.com> writes: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model. Jan 14 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 14.01.2011 20:50, schrieb Walter Bright: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model. Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300? On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3. Generally http://www.openprinting.org/printers is a really good page to see if a printer has Linux-support and where to get drivers etc. Cheers, - Daniel Jan 14 2011 Walter Bright <newshound2 digitalmars.com> writes: Daniel Gibson wrote: Am 14.01.2011 20:50, schrieb Walter Bright: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). 
Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300?
Yup. Do you want a picture? <g>
On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3.
Nyuk nyuk nyuk
Generally http://www.openprinting.org/printers is a really good page to see if a printer has Linux-support and where to get drivers etc.

Jan 14 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 14.01.2011 22:54, Walter Bright wrote: Daniel Gibson wrote: On 14.01.2011 20:50, Walter Bright wrote: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model. Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300? Yup. Do you want a picture? <g>
No, I believe you ;)
On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3. Nyuk nyuk nyuk
The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version, because Ubuntu doesn't list it anymore, but I'd be surprised if it didn't support it as well ;) hplip's docs say that the printer is supported when connected via USB or "Network or JetDirect" (but not Parallel port, but probably the printer doesn't have one). It may be that Ubuntu doesn't install hplip (HP's driver for all kinds of printers - including the LaserJet 2300 ;)) by default. That could be fixed by "sudo apt-get install hplip hpijs-ppds" and then trying to add the printer again (if there's no Voodoo to do that automatically).
Cheers,
- Daniel

Jan 14 2011

Walter Bright <newshound2 digitalmars.com> writes:
Daniel Gibson wrote: The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version,
8.10
because Ubuntu doesn't list it anymore, but I'd be surprised if it didn't support it as well ;) hplip's docs say that the printer is supported when connected via USB or "Network or JetDirect" (but not Parallel port, but probably the printer doesn't have one).
The HP 2300D is parallel port. (The "D" stands for duplex, an extra cost option on the 2300.)
It may be that Ubuntu doesn't install hplip (HP's driver for all kinds of printers - including the LaserJet 2300 ;)) by default. That could be fixed by "sudo apt-get install hplip hpijs-ppds" and then trying to add the printer again (if there's no Voodoo to do that automatically).
How I installed the printer is I just, more or less at random, said it was a different HP laserjet, and then it worked. The duplex doesn't work, though, nor any of the other variety of special features it has.

Jan 14 2011
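The PPD route discussed in these messages can be scripted end to end. A minimal sketch, assuming the package names quoted in this thread and a duplex-capable PPD downloaded from openprinting.org; the queue name, PPD filename and parallel-port URI are made-up placeholders, and lpadmin/lp are the stock CUPS command-line tools:

# install HP's driver stack (package names as suggested in this thread)
sudo apt-get install hplip hpijs-ppds

# some early Ubuntus also required membership in the lpadmin group
sudo usermod -aG lpadmin $USER

# register the printer with CUPS using the downloaded PPD;
# the device URI depends on how the printer is attached
sudo lpadmin -p LJ2300 -E -v parallel:/dev/lp0 -P HP-LaserJet_2300-Postscript.ppd

# quick smoke test, with duplex requested as an ordinary job option
lp -d LJ2300 -o sides=two-sided-long-edge /etc/passwd

With a PostScript PPD registered this way, duplex becomes a normal CUPS job option rather than a driver special case - which is what the PPD suggestion in the next message is aiming at.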
Daniel Gibson <metalcaedes gmail.com> writes:
On 15.01.2011 01:23, Walter Bright wrote: Daniel Gibson wrote: The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version,
8.10
because Ubuntu doesn't list it anymore, but I'd be surprised if it didn't support it as well ;) hplip's docs say that the printer is supported when connected via USB or "Network or JetDirect" (but not Parallel port, but probably the printer doesn't have one).
The HP 2300D is parallel port. (The "D" stands for duplex, an extra cost option on the 2300.)
HP says[1] it has also got USB, if their docs are correct for your version (and the USB port is just somehow hidden) it may be worth a try :) Also, http://www.openprinting.org/printer/HP/HP-LaserJet_2300 links (under "Postscript") a PPD that supports duplex. CUPS supports adding a printer and providing a custom PPD. (In my experience Postscript printers do support the parallel port, you can even just cat a PS file to /dev/lp0 if it has the right format) However, *maybe* performance (especially for pictures) is not as great as with HP's own PCL. As a Bonus: There are generic Postscript drivers for Windows as well, so with that PPD your Duplex may even work on Windows :)
It may be that Ubuntu doesn't install hplip (HP's driver for all kinds of printers - including the LaserJet 2300 ;)) by default. That could be fixed by "sudo apt-get install hplip hpijs-ppds" and then trying to add the printer again (if there's no Voodoo to do that automatically).
How I installed the printer is I just, more or less at random, said it was a different HP laserjet, and then it worked. The duplex doesn't work, though, nor any of the other variety of special features it has.
Maybe CUPS didn't list the LJ2300 as supported because (according to that outdated list I found in the Ubuntu 8.04 driver) it isn't supported at the parport.
[1] http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/18972-236251-236263-14638-f51-238800-238808-238809.html

Jan 14 2011

Jean Crystof <news news.com> writes:
Walter Bright Wrote: Daniel Gibson wrote: The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is for Walter to please try an Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu.
Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614
The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:
openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode switch
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes
and so on. It makes no sense to discuss "Linux". It's constantly changing.

Jan 14 2011

Jean Crystof <news news.com> writes:
Jean Crystof Wrote: Walter Bright Wrote: Daniel Gibson wrote: The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is for Walter to please try an Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu. Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614 The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:
openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode switch
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes
and so on. It makes no sense to discuss "Linux". It's constantly changing.
I tried to find the package lists for Ubuntu 8.10 (intrepid), but they're not online anymore. Using it is *totally* crazy. Do apt-get update and apt-get upgrade even work anymore? The Ubuntu idea was to provide a simple graphical tool for dist-upgrades. If I had designed it, I wouldn't even let you log in before upgrading. No wonder DMD binaries depended on legacy libraries some time ago. The compiler author should be using VAX or something similar like all dinosaurs do.

Jan 14 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/14/11 6:48 PM, Jean Crystof wrote: Walter Bright Wrote: Daniel Gibson wrote: The hplip version in Ubuntu 8.04 and in Ubuntu 9.10 support your printer - I don't know about your version,
8.10
This thread sure was interesting. Now what I'd like is for Walter to please try an Nvidia Geforce on Linux if the problems won't go away by upgrading his Ubuntu. Unfortunately that particular Ati graphics driver is constantly changing and it might take 1-2 years to make it work in Ubuntu: http://www.phoronix.com/vr.php?view=15614 The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10 is like telling how Windows 7 sucks when you're actually using Windows ME or 98. These have totally different software stacks, just to name a few:
openoffice 2 vs 3
ext3 vs ext4 filesystem
usb2 vs usb3 nowadays
kde3 vs kde4 (kde4 in 8.10 was badly broken)
gcc 4.3 vs 4.5
old style graphics drivers vs kernel mode switch
faster bootup
thousands of new features and drivers
tens of thousands of bugfixes
and so on. It makes no sense to discuss "Linux". It's constantly changing.
The darndest thing is I have Ubuntu 8.10 on my laptop with KDE 3.5 on top... and love it. But this all is exciting - I think I'll make the switch, particularly now that I have a working backup solution.
Andrei

Jan 14 2011

Walter Bright <newshound2 digitalmars.com> writes:
Jean Crystof wrote: The second thing is upgrading the Ubuntu. Telling how Linux sucks by using Ubuntu 8.10
To be fair, it was about the process of upgrading in place to Ubuntu 8.10 that sucked. It broke everything, and made me leery of upgrading again.

Jan 14 2011

Gour <gour atmarama.net> writes:
On Fri, 14 Jan 2011 22:40:11 -0800 Walter Bright <newshound2 digitalmars.com> wrote: To be fair, it was about the process of upgrading in place to Ubuntu 8.10 that sucked. It broke everything, and made me leery of upgrading again.
<shameful plugin>
/me likes Archlinux - all the hardware works and there is no 'upgrade' process like in Ubuntu 'cause it is a 'rolling release', iow. one can update whenever and as often as one desires. Moreover, it's very simple to build from the source if one wants/needs.
</shameful plugin>
Sincerely,
Gour
--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA
----------------------------------------------------------------

Jan 14 2011

retard <re tard.com.invalid> writes:
Fri, 14 Jan 2011 21:02:38 +0100, Daniel Gibson wrote: On 14.01.2011 20:50, Walter Bright wrote: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model. Yes, the HP Laserjets usually have really good support with PCL and sometimes even Postscript. You said you've got a HP (Laserjet?) 2300? On http://www.openprinting.org/printer/HP/HP-LaserJet_2300 it says that printer "works perfectly" and supports PCL 5e, PCL6 and Postscript level 3. Generally http://www.openprinting.org/printers is a really good page to see if a printer has Linux-support and where to get drivers etc.
I'm not sure if Walter's Ubuntu version already has this, but the latest Ubuntus automatically install all CUPS supported (USB) printers. I haven't tried this autodetection with parallel or network printers. The "easiest" way to configure CUPS is via the CUPS network interface ( http://localhost:631 ). In some early Ubuntu versions the printer configuration was broken. You had to add yourself to the lpadmin group and whatnot. My experiences with printers are:
Linux (Ubuntu)
1. Plug in the cable
2. Print
Mac OS X
1. Plug in the cable
2. Print
Windows
1. Plug in the cable.
2. Driver wizard appears, fails to install
3. Insert driver cd (preferably download the latest drivers from the internet)
4. Save your work
5. Reboot
6. Close the HP/Canon/whatever ad dialog
7. Restart the programs and load your work
8. Print

Jan 14 2011

Russel Winder <russel russel.org.uk> writes:
On Fri, 2011-01-14 at 11:50 -0800, Walter Bright wrote: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
I have an HP LJ 4000N and whilst it is perfectly functional, printing systems have decided it is too old to work with properly -- this is a Windows, Linux and Mac OS X problem. Backward compatibility is a three-edged sword.
--
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

Jan 14 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 14.01.2011 21:16, Russel Winder wrote: On Fri, 2011-01-14 at 11:50 -0800, Walter Bright wrote: Daniel Gibson wrote: But a few years ago it was a lot worse, especially with cheap inkjets. Many supported only GDI printing which naturally is best supported on Windows (GDI is a windows interface). Yeah, but I bought an *HP* laserjet, because I thought everyone supported them well. Turns out I probably have the only orphaned HP LJ model.
I have an HP LJ 4000N and whilst it is perfectly functional, printing systems have decided it is too old to work with properly -- this is a Windows, Linux and Mac OS X problem. Backward compatibility is a three-edged sword.
hplip on Linux should support it when connected via Parallel Port (but, according to a maybe outdated list, not USB or Network/Jetdirect). See also http://www.openprinting.org/printer/HP/HP-LaserJet_4000 :-)

Jan 14 2011

Russel Winder <russel russel.org.uk> writes:
On Sat, 2011-01-15 at 00:26 +0100, Daniel Gibson wrote: [ . . . ] hplip on Linux should support it when connected via Parallel Port (but, according to a maybe outdated list, not USB or Network/Jetdirect). See also http://www.openprinting.org/printer/HP/HP-LaserJet_4000 :-)
The problem is not the spooling per se, Linux, Windows and Mac OS X are all happy to talk to JetDirect, the problem is that the printer only has 7MB of memory and no disc, and operating systems seem now to think that printers have gigabytes of memory and make no allowances. The worst of it is though that the LJ 4000 has quite an old version of PostScript compared to that in use today and all the applications and/or drivers that render to PostScript are not willing (or able) to code generate for such an old PostScript interpreter. Together this leads to a huge number of stack fails on print jobs.
--
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

Jan 15 2011

retard <re tard.com.invalid> writes:
Thu, 13 Jan 2011 19:04:59 -0500, Nick Sabalausky wrote:
My failure list from most to least would be this:
1. power supply / printer
2. optical drive / floppies (the disks, not the drives)
3. hard drive
4. monitor / mouse / fan
My list is pretty much the same. I bought a (Toshiba IIRC) dot matrix printer (the price was insane) in the 1980s. It STILL works fine when printing ASCII text, but it's "a bit" noisy and slow. Another thing is, after upgrading from DOS, I haven't found any drivers for printing graphics. On DOS, only some programs had specially crafted drivers for this printer and some had drivers for some other proprietary protocol the printer "emulates" :-) My second printer was some Canon LBP in the early 90s. STILL works without any problems (still connected to my Ubuntu CUPS server), but it's also relatively slow and physically huge. I used to replace the toner and drums, toner every ~2 years (prints 1500-3000 pages of 5% text) and drum every 5-6 years. We bought it used from a company. It had been repaired once by the official Canon service. After that, almost 20 years without repair. I also bought a faster (USB!) laser printer from Brother a couple of years ago. I've replaced the drum once and replaced the toner three times with some cheapo 3rd party stuff. It was a bit risky to buy a set of 10 toner kits along with the printers (even the laser printers are so cheap now), but it was an especially cheap offer and we thought the spare part prices go up anyway. The amortized printing costs are probably less than 3 cents per page. Now, I've also bought Canon, HP, and Epson inkjets. What can I say.. The printers are cheap. The ink is expensive.
They're slow, and the result looks like shit (not very photo-realistic) compared to the online printing services. AND I've "broken" about 8 of them in 15 years. It's way too expensive to start buying spare parts (e.g. when the dry ink gets stuck in the ink "tray" in Canon printers). Nowadays I print photos using some online service. The inkjet printer quality still sucks IMO. Don't buy them.
PSUs: Never ever buy the cheap models. There's a list of bad manufacturers on the net. They make awful shit. The biggest problem is, if the PSU breaks, it might also break other parts, which makes all PSU failures really expensive. I've bought <ad>Seasonic, Fortron, and Corsair</ad> PSUs since the late 1990s. They work perfectly. If some part fails, it's the PSU fan (or sometimes the fuse when switching the PSU on causes a surge). Fuses are cheap. Fans last much longer if you replace the engine oil every 2-4 years. Scrape off the sticker in the center of the fan and pour in appropriate oil. I'm not kidding! I've got one 300W PSU from 1998 and it still works and the fan is almost as quiet as if it was new.
Optical drives: Number 1 reason for breakage, I forget to close the tray and kick it off! Currently I don't use internal optical drives anymore. There's one external dvd burner. I rarely use it. And it's safe from my feet on the table :D
Hard drives: these always fail, sooner or later. There's nothing you can do except RAID and backups (labs.google.com/papers/disk_failures.pdf). I've successfully terminated all (except those in use) hard drives so far by using them normally.
Monitors: The CRTs used to break every 3-5 years. Even the high quality Sony monitors :-| I've used TFT panels since 2003. The inverter of the first 14" TFT broke after 5 years of use. Three others are still working, after 1-6 years of use.
Mice: I've always bought Logitech mice. NEVER had any failures. The current one is MX 510 (USB). Previous ones used the COM port. The bottom of the MX510 shows signs of hardcore use, but the internal parts haven't fallen off yet and the LED "eye" works :-D
Fans: If you want reliability, buy fans with ball bearings. They make more noise than sleeve bearings. I don't believe in expensive high quality fans. Sure, there are differences in the airflow and noise levels, but the max reliability won't be any better. The normal PC stores don't sell any fans with industrial quality bearings. Like I said before, remember to replace the oil http://www.dansdata.com/fanmaint.htm -- I still have high quality fans from the 1980s in 24/7 use. The only problem is, I couldn't anticipate how much the power consumption grows. The old ones are 40-80 mm fans. Now (at least gaming) computers have 120mm or 140mm or even bigger fans.

Jan 14 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/14/11, retard <re tard.com.invalid> wrote: Like I said before, remember to replace the oil http://www.dansdata.com/fanmaint.htm
I've never thought of this. I did have a couple of failed fans over the years but I always had a bunch of spares from the older equipment which I've replaced. Still, that is a cool tip, thanks!
And yes, avoid cheap PSU's or at least get one from a good manufacturer. It's also important to have a PSU that can actually power your PC.

Jan 14 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 14.01.2011 15:21, retard wrote: PSUs: Never ever buy the cheap models.
Yup, one should never cheap out on PSUs. Also cheap PSUs usually are less efficient.
Optical drives: Number 1 reason for breakage, I forget to close the tray and kick it off! Currently I don't use internal optical drives anymore. There's one external dvd burner. I rarely use it. And it's safe from my feet on the table :D
If you don't trash them yourself (:P) optical drives sometimes fail because a rubber band in them that rotates the disk (or something) becomes brittle or worn after some years. These can usually be replaced.
Hard drives: these always fail, sooner or later. There's nothing you can do except RAID and backups (labs.google.com/papers/disk_failures.pdf). I've successfully terminated all (except those in use) hard drives so far by using them normally.
Not kicking/hitting your PC and cooling them appropriately helps, but in the end modern HDDs die anyway. I've had older (4GB) HDDs run for over 10 years, much of the time even 24/7, without failing.
Mice: I've always bought Logitech mice. NEVER had any failures. The current one is MX 510 (USB). Previous ones used the COM port. The bottom of the MX510 shows signs of hardcore use, but the internal parts haven't fallen off yet and the LED "eye" works :-D
I often had mouse buttons failing in Logitech mice. Sometimes I removed the corresponding switches in the mouse and soldered one from another old cheap mouse into it, which fixed it until it broke again.. Now I'm using Microsoft mice and they seem more reliable so far.
Fans: If you want reliability, buy fans with ball bearings. They make more noise than sleeve bearings. I don't believe in expensive high quality fans. Sure, there are differences in the airflow and noise levels, but the max reliability won't be any better. The normal PC stores don't sell any fans with industrial quality bearings. Like I said before, remember to replace the oil http://www.dansdata.com/fanmaint.htm -- I still have high quality fans from the 1980s in 24/7 use. The only problem is, I couldn't anticipate how much the power consumption grows. The old ones are 40-80 mm fans. Now (at least gaming) computers have 120mm or 140mm or even bigger fans.
Thanks for the tip :-)

Jan 14 2011

Walter Bright <newshound2 digitalmars.com> writes:
Thanks for the fan info. I'm going to go oil my fans!

Jan 14 2011

"Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message news:igpm5t$26so$1 digitalmars.com...
Now, I've also bought Canon, HP, and Epson inkjets. What can I say.. The printers are cheap. The ink is expensive. They're slow, and the result looks like shit (not very photo-realistic) compared to the online printing services. AND I've "broken" about 8 of them in 15 years. It's way too expensive to start buying spare parts (e.g. when the dry ink gets stuck in the ink "tray" in Canon printers). Nowadays I print photos using some online service. The inkjet printer quality still sucks IMO. Don't buy them.
A long time ago we got, for free, an old Okidata printer that some school or company or something was getting rid of. It needed a new, umm, something really really expensive (I forget offhand), so there was a big black streak across each page. And it didn't do color. But I absolutely loved that printer. Aside from the black streak, everything about it worked flawlessly every time. *Never* jammed once, blazing fast, good quality. Used that thing for years. Eventually we did need something that could print without that streak and we went through a ton of inkjets.
Every one of them was total shit until about 2 or 3 years ago we got an HP Photosmart C4200 printer/scanner combo which isn't as good as the old Okidata, but it's the only inkjet I've ever used that I'd consider "not shit". The software/drivers for it, though, still fall squarely into the "pure shit" category. Oh well... Maybe there's Linux drivers for it that are better...
PSUs: Never ever buy the cheap models. There's a list of bad manufacturers on the net. They make awful shit.
Another problem is that, as places like Sharky Extreme and Tom's Hardware found out while testing, it seems to be common practice for PSU manufacturers to outright lie about the wattage.
Optical drives: Number 1 reason for breakage, I forget to close the tray and kick it off!
Very much related to that: I truly, truly *hate* all software that decides it makes sense to eject the tray directly. And even worse: OSes not having a universal setting for "Disable *all* software-triggered ejects". That option should be standard and default. I've seriously tried to learn how to make Windows rootkits *just* so I could hook into the right dll/function and disable it system-wide once-and-for-all. (Never actually got anywhere with it though, and eventually just gave up.)
Hard drives: these always fail, sooner or later. There's nothing you can do except RAID and backups
And SMART monitors: I've had a total of two HDDs fail, and in both cases I really lucked out. The first one was in my Mac, but it was after I was already getting completely fed up with OSX and Apple, so I didn't really care much - I was mostly back on Windows again by that point. The second failure just happened to be the least important of the three HDDs in my system. I was still pretty upset about it though, so it was a big wakeup call: I *will not* have a primary system anymore that doesn't have a SMART monitoring program, with temperature readouts, always running. And yes, it can't always predict a failure, but sometimes it can, so IMO there's no good reason not to have it. That's actually one of the things I don't like about Linux: nothing like that seems to exist for it. Sure, there's a cmd line program you can poll, but that doesn't remotely cut it.
Monitors: The CRTs used to break every 3-5 years. Even the high quality Sony monitors :-| I've used TFT panels since 2003. The inverter of the first 14" TFT broke after 5 years of use. Three others are still working, after 1-6 years of use.
I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution), and for a long time I've always had either a dual-monitor setup or dual systems with one monitor on each, so I've had a lot of monitors. But I've only ever had *one* CRT go bad, and I definitely use them for more than 5 years.
Also, FWIW, I'm convinced that Sony is *not* as good as people generally think. Maybe they were in the 70's or 80's, I don't know, but they're frequently no better than average. It's common for their high end DVD players to have problems or limitations that the cheap bargain-brands (like CyberHome) don't have. I had an expensive portable Sony CD player and the buttons quickly crapped out rendering it unusable (not that I care anymore since I have a Toshiba Gigabeat F with the Rockbox firmware - iPod be damned). The PS2 was reigning champ for "most unreliable video game hardware in history" until the 360 stole the title by a landslide. And I've *never* found a pair of Sony headphones that sounded even *remotely* as good as a pair from Koss of comparable price and model. Sony is the Buick/Cadillac/Oldsmobile of consumer electronics, *not* the Lexus/Benz as most people seem to think.
Mice: I've always bought Logitech mice. NEVER had any failures. The current one is MX 510 (USB). Previous ones used the COM port. The bottom of the MX510 shows signs of hardcore use, but the internal parts haven't fallen off yet and the LED "eye" works :-D
MS and Logitech mice are always the best. I've never come across any other brand that put out anything but garbage (that does include Apple, except that in Apple's case it's because of piss-poor design rather than the piss-poor engineering of all the other non-MS/Logitech brands). I've been using this Logitech Trackball for probably over five years and I absolutely love it: http://www.amazon.com/Logitech-Trackman-Wheel-Optical-Silver/dp/B00005NIMJ/ In fact, I have two of them. The older one has been starting to get a bad connection between the cord and the trackball, but that's probably my fault. And heck, the MS mouse my mom uses has a left-button that's been acting up, so nothing's perfect no matter what brand. But MS/Logitech are definitely still worlds ahead of anyone else. (Which is kind of weird because, along with keyboards, mice are the *only* hardware I trust MS with. Every other piece of MS hardware either has reliability problems or, in the case of all their game controllers going all the way back to the Sidewinders in the pre-XBox days, a completely worthless D-Pad.)

Jan 15 2011
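On the SMART point above: the command-line tool Nick alludes to is presumably smartctl from the smartmontools package, and its companion daemon smartd at least covers the "always running and warns you" part, short of a GUI with temperature readouts. A minimal sketch; the device name is an example and the DEVICESCAN line is the stock directive from smartd's own documentation, not a tuned setup:

# one-off checks: overall health verdict, then the raw attribute table
sudo smartctl -H /dev/sda
sudo smartctl -A /dev/sda

# /etc/smartd.conf: scan all drives, monitor all attributes (-a),
# schedule short/long self-tests, and mail root when trouble shows up
DEVICESCAN -a -s (S/../.././02|L/../../6/03) -m root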
retard <re tard.com.invalid> writes:
Sat, 15 Jan 2011 03:23:41 -0500, Nick Sabalausky wrote: "retard" <re tard.com.invalid> wrote in message PSUs: Never ever buy the cheap models. There's a list of bad manufacturers on the net. They make awful shit. Another problem is that, as places like Sharky Extreme and Tom's Hardware found out while testing, it seems to be common practice for PSU manufacturers to outright lie about the wattage.
That's true. But it's also true that PSU efficiency and power have improved drastically. And their quality overall. In the 1990s it was pretty common that computer stores mostly sold those shady brands with a more or less lethal design. There are lots of reliable brands now. If you're not into gaming, it hardly matters which (good) PSU you buy. They all provide 300+ Watts and your system might consume 70-200 Watts, even under full load.
Monitors: The CRTs used to break every 3-5 years. Even the high quality Sony monitors :-| I've used TFT panels since 2003. The inverter of the first 14" TFT broke after 5 years of use. Three others are still working, after 1-6 years of use. I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution), and for a long time I've always had either a dual-monitor setup or dual systems with one monitor on each, so I've had a lot of monitors. But I've only ever had *one* CRT go bad, and I definitely use them for more than 5 years. Also, FWIW, I'm convinced that Sony is *not* as good as people generally think. Maybe they were in the 70's or 80's, I don't know, but they're frequently no better than average.
I've disassembled a couple of CRT monitors. The Sony monitors have had aluminium cased "modules" inside them. So replacing these should be relatively easy. They also had detachable wires between these units.
Cheaper monitors have three circuit boards (one for the front panel, one in the back of the tube and one in the bottom). It's usually the board in the bottom of the monitor that breaks, which means that you need to cut all wires to remove it in cheaper monitors. It's just this high level design that I like in Sony's monitors. Probably other high quality brands like Eizo also do this. Sony may also use bad quality discrete components like capacitors and ICs. I can't say anything about that.

Jan 15 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution)
I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Andrei

Jan 15 2011

"Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com...
On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs.
Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.

Jan 15 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday 15 January 2011 19:11:26 Nick Sabalausky wrote: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com... On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues.
Why would you _want_ more than one resolution? What's the use case? I'd expect that you'd want the highest resolution that you could get and be done with it.
- Jonathan M Davis

Jan 15 2011

Daniel Gibson <metalcaedes gmail.com> writes:
On 16.01.2011 04:33, Jonathan M Davis wrote: On Saturday 15 January 2011 19:11:26 Nick Sabalausky wrote: "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com...
On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues. Why would you _want_ more than one resolution? What's the use case? I'd expect that you'd want the highest resolution that you could get and be done with it. - Jonathan M Davis
Maybe for games (if your PC isn't fast enough for full resolution or the game doesn't support it)..
but that is no problem at all: flatscreens can interpolate other resolutions and while the picture may not be good enough for text (like when programming) and stuff it *is* good enough for games on decent flatscreens.
There's two reasons it's good for games:
1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution.
2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD).
For non-games-usage I never had the urge to change the resolution of my flatscreens. And I really prefer them to any CRT I've ever used.
For non-games, just off-the-top-of-my-head: Bumping up to a higher resolution can be good when dealing with images, or whenever you're doing anything that could use more screen real-estate at the cost of smaller UI elements. And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200). Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution. Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price.
And if I'm doing some work on the computer, and it *is* set at a sensible resolution that works for both the given monitor and the task at hand, I've never noticed a real improvement with LCD versus CRT. Yea, it is a *little* bit better, but I've never noticed any difference while actually *doing* anything on a computer: only when I stop and actually look for differences.
Also, it can be good when mirroring the display to TV-out or, better yet, using the "cinema mode" where any video-playback is sent fullscreen to the TV (which I'll often do), because those things tend to not work very well when the monitor isn't reduced to the same resolution as the TV.
OTOH when he has a good CRT (high resolution, good refresh rate) there may be little reason to replace it, as long as it's working.. apart from the high power consumption and the size maybe.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.

Jan 15 2011

"Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message news:igttbt$16hu$1 digitalmars.com...
OTOH when he has a good CRT (high resolution, good refresh rate) there may be little reason to replace it, as long as it's working.. apart from the high power consumption and the size maybe. I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
As for size, well, I have enough space, so at least for me that's a non-issue.

Jan 15 2011

Adam Ruppe <destructionator gmail.com> writes:
I stuck with my CRT for a long time. What I really liked about it was the bright colors. I've never seen an LCD match that. But, my CRT started to give out.
It'd go to a bright line in the middle and darkness everywhere else at random. It started doing it just every few hours, then it got to the point where it'd do it every 20 minutes or so. I found if I give it a nice pound on the side, it'd go back to normal for a while. I was content with that for months. ... but the others living with me weren't. *WHACK* OH MY GOD JUST BUY A NEW ONE ALREADY! So I gave in and looked for a replacement CRT with the same specs. But couldn't find one. I gave in and bought an LCD in the same price range (~$150) with the same resolution (I liked what I had!) Weighed less, left room on the desk for my keyboard, and best of all, I haven't had to hit it yet. But colors haven't looked quite the same since and VGA text mode just looks weird. Alas.

Jan 16 2011

Walter Bright <newshound2 digitalmars.com> writes:
Nick Sabalausky wrote: And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200). Yea, you can get super high resolution non-CRTs, but they're much more expensive.
I have 1900x1200 on LCD, and I think it was around $325. It's a Hanns-G thing, from Amazon. Of course, I don't use it for games. I got thoroughly burned out on that when I had a job in college developing/testing them.

Jan 15 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
For games? I just switch to software rendering. I get almost the same quality as a CRT on low resolutions. It's still not perfect, but it's close. Soo.. what are you playing that needs low resolutions and high framerates, Nick? Quake? :D

Jan 15 2011

retard <re tard.com.invalid> writes:
Sat, 15 Jan 2011 23:47:09 -0500, Nick Sabalausky wrote: Bumping up to a higher resolution can be good when dealing with images, or whenever you're doing anything that could use more screen real-estate at the cost of smaller UI elements. And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200).
The standard resolution for new flat panels has been 1920x1080 or 1920x1200 for a long time now. The panel size has slowly improved from 12-14" to 21.5" and 24", the price has gone down to about $110-120. Many of the applications have been tuned for 1080p. When I abandoned CRTs, the most common size was 17" or 19". Those monitors indeed supported resolutions up to 1600x1200 or more. However the best resolution was about 1024x768 or 1280x1024 for 17" monitors and 1280x1024 or a step up for 19" monitors. I also had one 22" or 23" Sony monitor which had the optimal resolution of 1600x1200 or at most one step bigger. It's much less than what the low-end models offer now. It's hard to believe you're using anything larger than 1920x1200 because the legacy graphics cards don't support very high resolutions, especially via DVI. For example I recently noticed a top of the line Geforce 6 card only supports resolutions up to 2048x1536 at 85 Hz. Guess how it works with a 30" Cinema Display HD @ 2560x1600. Another thing is subpixel antialiasing. You can't really do it without a TFT panel and digital video output.
Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution.
Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price.
You mean DPI settings?
Also, it can be good when mirroring the display to TV-out or, better yet, using the "cinema mode" where any video-playback is sent fullscreen to the TV (which I'll often do), because those things tend to not work very well when the monitor isn't reduced to the same resolution as the TV.
But my TV happily accepts 1920x1080? Sending the same digital signal to both works fine here. YMMV
OTOH when he has a good CRT (high resolution, good refresh rate) there may be little reason to replace it, as long as it's working.. apart from the high power consumption and the size maybe. I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
How much power do the CRTs consume? The max power consumption for LED powered panels has gone down considerably and you never use their max brightness. Typical power consumption of a modern 21.5" panel might stay between 20 and 30 Watts when you're just typing text.

Jan 16 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
I need to get a better LCD/LED display one of these days. Right now I'm sporting a Samsung 2232BW, it's a 22" screen with a native 1680x1050 resolution (16:10). But it has horrible text rendering when antialiasing is enabled. I've tried a bunch of screen calibration software, changing DPI settings, but nothing worked. I know it's not my eyes to blame since antialiased fonts look perfectly fine for me on a few laptops that I've seen.

Jan 16 2011

Russel Winder <russel russel.org.uk> writes:
On Sun, 2011-01-16 at 16:55 +0100, Andrej Mitrovic wrote: I need to get a better LCD/LED display one of these days. Right now I'm sporting a Samsung 2232BW, it's a 22" screen with a native 1680x1050 resolution (16:10). But it has horrible text rendering when antialiasing is enabled. I've tried a bunch of screen calibration software, changing DPI settings, but nothing worked. I know it's not my eyes to blame since antialiased fonts look perfectly fine for me on a few laptops that I've seen.
It may not be the monitor, it may be the operating system setting. In particular what level of smoothing and hinting do you have set for the fonts on LCD screen?
Somewhat counter-intuitively, font rendering gets worse if you have no hinting or you have full hinting. It is much better to set "slight hinting". Assuming you have sub-pixel smoothing set of course.
--
Russel.
===========================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

Jan 16 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 1/16/11, Russel Winder <russel russel.org.uk> wrote: It may not be the monitor, it may be the operating system setting. In particular what level of smoothing and hinting do you have set for the fonts on LCD screen? Somewhat counter-intuitively, font rendering gets worse if you have no hinting or you have full hinting. It is much better to set "slight hinting". Assuming you have sub-pixel smoothing set of course.
Yes, I know about those. Linux has arguably more settings to choose from, but it didn't help out. There's also RGB>BGR>GBR switches and contrast settings, and the ones you've mentioned like font hinting. It just doesn't seem to work on this screen no matter what I choose. Also, this screen has very poor yellows. When you have a solid yellow picture displayed you can actually see the color having a gradient from a darkish yellow to very bright yellow (almost white) from the top to the bottom of the screen, without even moving your head. But I bought this screen because it was rather cheap at the time and it's pretty good for games, which is what I cared about a few years ago. (low input lag + no tearing, no blurry screen when moving rapidly). I've read a few forum posts around the web and it seems other people have problems with this model and antialiasing as well. I'll definitely look into buying a quality screen next time though.

Jan 16 2011

"Nick Sabalausky" <a a.a> writes:
"retard" <re tard.com.invalid> wrote in message news:igv3p3$2n4k$2 digitalmars.com...
Sat, 15 Jan 2011 23:47:09 -0500, Nick Sabalausky wrote: Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution. Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price. You mean DPI settings?
I just mean uniformly scaled UI elements. For instance, you can usually adjust a UI's font size, but the results tend to be like selectively scaling up just the nose, mouth and hands on a picture of a human. And then parts of it still end up too small. And, especially on Linux, those sorts of settings don't always get obeyed by all software anyway.
Also, it can be good when mirroring the display to TV-out or, better yet, using the "cinema mode" where any video-playback is sent fullscreen to the TV (which I'll often do), because those things tend to not work very well when the monitor isn't reduced to the same resolution as the TV. But my TV happily accepts 1920x1080? Sending the same digital signal to both works fine here. YMMV
Mine's an SD... which I suppose I have to defend... Never felt a need to replace it. Never cared whether or not I could see athlete's drops of sweat or individual blades of grass. And I have a lot of SD content that's never going to magically change to HD, and that stuff looks far better on an SD set anyway than on any HD set I've ever seen no matter what fancy delay-introducing filter it had (except maybe the CRT HDTVs that don't exist anymore). Racing games, FPSes and Pikmin are the *only* things for which I have any interest in HD (since, for those, it actually matters if you're able to see small things in the distance). But then I'd be spending money (which I'm very short on), and making all my other SD content look worse, *AND* since I'm into games, it would be absolutely essential to get one without any input->display lag, which is very difficult since 1. The manufacturers only seem to care about movies and 2.
From what I've seen, they never seem to actually tell you how much lag there is. So it's a big bother, costs money, and has drawbacks. Maybe someday (like when I get rich and the downsides improve) but not right now.

Jan 16 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/15/11 10:47 PM, Nick Sabalausky wrote: "Daniel Gibson"<metalcaedes gmail.com> wrote in message news:igtq08$2m1c$1 digitalmars.com... There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD).
It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
For non-games-usage I never had the urge to change the resolution of my flatscreens. And I really prefer them to any CRT I've ever used. For non-games, just off-the-top-of-my-head: Bumping up to a higher resolution can be good when dealing with images, or whenever you're doing anything that could use more screen real-estate at the cost of smaller UI elements. And CRTs are more likely to go up to really high resolutions than non-CRTs. For instance, 1600x1200 is common on even the low-end CRT monitors (and that was true even *before* televisions started going HD - which is *still* lower-rez than 1600x1200). Yea, you can get super high resolution non-CRTs, but they're much more expensive. And even then, you lose the ability to do any real desktop work at a more typical resolution. Which is bad because for many things I do want to limit my resolution so the UI isn't overly-small. And yea, there are certain things you can do to scale up the UI, but I've never seen an OS, Win/Lin/Mac, that actually handled that sort of thing reasonably well. So CRTs give you all that flexibility at a sensible price.
It's odd how everybody else can put up with LCDs for all kinds of work.
And if I'm doing some work on the computer, and it *is* set at a sensible resolution that works for both the given monitor and the task at hand, I've never noticed a real improvement with LCD versus CRT. Yea, it is a *little* bit better, but I've never noticed any difference while actually *doing* anything on a computer: only when I stop and actually look for differences.
Meanwhile, you are looking at a gamma gun shooting atcha.
Also, it can be good when mirroring the display to TV-out or, better yet, using the "cinema mode" where any video-playback is sent fullscreen to the TV (which I'll often do), because those things tend to not work very well when the monitor isn't reduced to the same resolution as the TV. OTOH when he has a good CRT (high resolution, good refresh rate) there may be little reason to replace it, as long as it's working.. apart from the high power consumption and the size maybe. I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared.
Absolutely. There's a CRT brand that consumes surprisingly close to an LCD. It's called "Confirmation Bias".
Andrei

Jan 16 2011

"Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igvhj9$mri$1 digitalmars.com...
On 1/15/11 10:47 PM, Nick Sabalausky wrote: There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD). It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution.
Wow, you really seem to be taking a lot of this personally.
First, I assume you meant "...everybody except you is using non-CRTs..." Second, how exactly is the modern-day work of graphics hardware producers and game vendors that you speak of going to affect games from more than a few years ago? What?!? You're still watching movies that were filmed in the 80's?!? Dude, you need to upgrade!!!
It's odd how everybody else can put up with LCDs for all kinds of work.
Strawman. I never said anything remotely resembling "LCDs are unusable." What I've said is that 1. They have certain benefits that get overlooked, and 2. Why should *I* spend the money to replace something that already works fine for me?
And if I'm doing some work on the computer, and it *is* set at a sensible resolution that works for both the given monitor and the task at hand, I've never noticed a real improvement with LCD versus CRT. Yea, it is a *little* bit better, but I've never noticed any difference while actually *doing* anything on a computer: only when I stop and actually look for differences. Meanwhile, you are looking at a gamma gun shooting atcha.
You can't see anything at all without electromagnetic radiation shooting at you.
I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared. Absolutely. There's a CRT brand that consumes surprisingly close to an LCD. It's called "Confirmation Bias".
I'm pretty sure I did point out the limitations of my observation: "...on all the sets I compared". And it's pretty obvious I wasn't undertaking a proper extensive study. There's no need for sarcasm.

Jan 16 2011

Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/16/11 2:07 PM, Nick Sabalausky wrote: "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message news:igvhj9$mri$1 digitalmars.com... On 1/15/11 10:47 PM, Nick Sabalausky wrote: There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD). It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution. Wow, you really seem to be taking a lot of this personally.
Not at all!
First, I assume you meant "...everybody except you is using non-CRTs..." Second, how exactly is the modern-day work of graphics hardware producers and game vendors that you speak of going to affect games from more than a few years ago? What?!? You're still watching movies that were filmed in the 80's?!? Dude, you need to upgrade!!!
You have a good point if playing vintage games is important to you. It's odd how everybody else can put up with LCDs for all kinds of work. Strawman. I never said anything remotely resembling "LCDs are unusable." What I've said is that 1. They have certain benefits that get overlooked, The benefits of CRTs are not being overlooked. They are insignificant or illusory. If they were significant, CRTs would still be in significant use. Donning a flat panel is not a display of social status. Most people need computers to get work done, and they'd use CRTs if CRTs would have them do better work. A 30" 2560x1280 monitor is sitting on my desk. (My employer bought it for me without asking; I "only" had a 26". They thought making me more productive at the cost of a monitor is simple business sense.) My productivity would be seriously impaired if I replaced either monitor with even the best CRT out there. and 2. Why should *I* spend the money to replace something that already works fine for me? If it works for you, fine. I doubt you wouldn't be more productive with a larger monitor. But at any rate, entering money as an essential part of the equation is (within reason) misguided. This is your livelihood, your core work. Save on groceries, utilities, cars, luxury... but don't "save" on what impacts your real work. And if I'm doing some work on the computer, and it *is* set at a sensible resolution that works for both the given monitor and the task at hand, I've never noticed a real improvement with LCD versus CRT. Yea, it is a *little* bit better, but I've never noticed any difference while actually *doing* anything on a computer: only when I stop and actually look for differences. Meanwhile, you are looking at a gamma gun shooting atcha. You can't see anything at all without electromagnetic radiation shooting at you. Nonono. Gamma = electrons. CRT monitors have what's literally called a gamma gun. It's aimed straight at your eyes. Absolutely. There's a CRT brand that consumes surprisingly close to an LCD. It's called "Confirmation Bias". I'm pretty sure I did point out the limitations of my observation: "...on all the sets I compared". And it's pretty obvious I wasn't undertaking a proper extensive study. There's no need for sarcasm. There is. It would take anyone two minutes of online research to figure out that your comparison is wrong. Andrei Jan 16 2011 so <so so.do> writes: You have a good point if playing vintage games is important to you. He was quite clear on that, I think; this is not like natural selection. I don't know Nick, but like the new generation of movies, new generation games mostly suck. If I had to, I would definitely pick the old ones, for both of them. The benefits of CRTs are not being overlooked. They are insignificant or illusory. If they were significant, CRTs would still be in significant use. Donning a flat panel is not a display of social status. Most people need computers to get work done, and they'd use CRTs if CRTs would have them do better work. Well, you can't value things like that, you know better than that. It is not just about how significant or insignificant it is. How is it watching things from only one angle? How is it reading text, or I should say trying to read? How about colors or refresh rate? Yes, LCD has its own benefits too, and quite a bit of them. You forget the biggest factor, cost, for both the user and, mainly, the producer.
Jan 16 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/16/11 1:38 PM, Andrei Alexandrescu wrote: On 1/15/11 10:47 PM, Nick Sabalausky wrote: "Daniel Gibson"<metalcaedes gmail.com> wrote in message news:igtq08$2m1c$1 digitalmars.com... There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD). It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. s/is using/is not using/ Andrei Jan 16 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrei Alexandrescu wrote: Meanwhile, you are looking at a gamma gun shooting atcha. I always worried about that. Nobody actually found anything wrong, but still. Jan 16 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: With CRTs I could spend a few hours in front of the PC, but after that my eyes would get really tired and I'd have to take a break. Since I switched to LCDs I've never had this problem anymore; I could spend a day staring at a screen if I wanted to. Of course, it's still best to take some time off regardless of the screen type. Anyway.. how about that Git thing, then? :D Jan 16 2011 "Nick Sabalausky" <a a.a> writes: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.652.1295210795.4748.digitalmars-d puremagic.com... With CRTs I could spend a few hours in front of the PC, but after that my eyes would get really tired and I'd have to take a break. Since I switched to LCDs I've never had this problem anymore; I could spend a day staring at a screen if I wanted to. Of course, it's still best to take some time off regardless of the screen type. I use a light-on-dark color scheme. Partly because I like the way it looks, but also partly because it's easier on my eyes. If I were using a scheme with blazing-white everywhere, I can imagine a CRT might be a bit harsh. Anyway.. how about that Git thing, then? :D I'd been holding on to SVN for a while, but that discussion did convince me to give DVCSes an honest try (haven't gotten around to it yet though, but plan to). Jan 16 2011 Walter Bright <newshound2 digitalmars.com> writes: Andrej Mitrovic wrote: With CRTs I could spend a few hours in front of the PC, but after that my eyes would get really tired and I'd have to take a break. Since I switched to LCDs I've never had this problem anymore; I could spend a day staring at a screen if I wanted to. Of course, it's still best to take some time off regardless of the screen type. I need reading glasses badly, but fortunately not for reading a screen. I never had eye fatigue problems with it. I did buy a 28" LCD for my desktop, which is so nice that I can no longer use my laptop screen for dev. :-( Anyway.. how about that Git thing, then? :D We'll be moving dmd, phobos, druntime, and the docs to Github shortly. The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D. Jan 16 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Sunday 16 January 2011 14:07:57 Walter Bright wrote: Andrej Mitrovic wrote: Anyway.. how about that Git thing, then?
:D We'll be moving dmd, phobos, druntime, and the docs to Github shortly. The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D. Great! That will make it _much_ easier to make check-ins while working on other stuff in parallel. That's a royal pain with svn, and while it's slightly better when using git-svn to talk to an svn repository, it isn't much better, because the git branching stuff doesn't understand that you can't reorder commits to svn, so you can't merge in branches after having committed to the svn repository. But having it be pure git fixes all of that. So, this is great news. And I don't think that there's anything wrong with being a bit slow about the transition if taking our time means that we get it right, though obviously, the sooner we transition over, the sooner we get the benefits. - Jonathan M Davis Jan 16 2011 Walter Bright <newshound2 digitalmars.com> writes: Jonathan M Davis wrote: That will make it _much_ easier to make check-ins while working on other stuff in parallel. Yes. And there's the large issue that being on github simply makes contributing to the D project more appealing to a wide group of excellent developers. Jan 16 2011 Daniel Gibson <metalcaedes gmail.com> writes: On 17.01.2011 06:12, Walter Bright wrote: Jonathan M Davis wrote: That will make it _much_ easier to make check-ins while working on other stuff in parallel. Yes. And there's the large issue that being on github simply makes contributing to the D project more appealing to a wide group of excellent developers. How will the licensing issue (forks of the dmd backend are only allowed with your permission) be solved? Jan 16 2011 Walter Bright <newshound2 digitalmars.com> writes: Daniel Gibson wrote: How will the licensing issue (forks of the dmd backend are only allowed with your permission) be solved? It shouldn't be a problem as long as those forks are for the purpose of developing patches to the main branch, as is done now in svn. I view it like this: if you want to use the back end to develop a separate compiler, or set yourself up as a distributor of dmd, incorporate it into some other product, etc., please ask for permission. Basically, anyone using it has to agree not to sue Symantec or Digital Mars, and conform to: Jan 16 2011 Robert Clipsham <robert octarineparrot.com> writes: On 17/01/11 06:25, Walter Bright wrote: Daniel Gibson wrote: How will the licensing issue (forks of the dmd backend are only allowed with your permission) be solved? It shouldn't be a problem as long as those forks are for the purpose of developing patches to the main branch, as is done now in svn. I view it like this: if you want to use the back end to develop a separate compiler, or set yourself up as a distributor of dmd, incorporate it into some other product, etc., Basically, anyone using it has to agree not to sue Symantec or Digital Mars, and conform to: Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting Symantec?
Not that anyone reads it, it is kind of off-putting to see that over a decade later though, for anyone who bothers reading it :P -- Robert http://octarineparrot.com/ Jan 17 2011 Walter Bright <newshound2 digitalmars.com> writes: Robert Clipsham wrote: Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting Symantec? Not that anyone reads it, it is kind of off-putting to see that over a decade later though, for anyone who bothers reading it :P Consider it like the DNA we all still carry around for fish gills! Jan 17 2011 Robert Clipsham <robert octarineparrot.com> writes: On 17/01/11 20:29, Walter Bright wrote: Robert Clipsham wrote: Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting Symantec? Not that anyone reads it, it is kind of off-putting to see that over a decade later though, for anyone who bothers reading it :P Consider it like the DNA we all still carry around for fish gills! I don't know about you, but I take full advantage of my gills! -- Robert http://octarineparrot.com/ Jan 17 2011 On Mon, 17 Jan 2011, Walter Bright wrote: Robert Clipsham wrote: Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting Symantec? Not that anyone reads it, it is kind of off-putting to see that over a decade later though, for anyone who bothers Consider it like the DNA we all still carry around for fish gills! In all seriousness, the backend license makes dmd look very strange. It threw the lawyers I consulted for a serious loop. At a casual glance it gives the impression of software that's massively out of date and out of touch with the real world. I know that updating it would likely be very painful, but is it just painful or impossible? Is it something that money could solve? I'd chip in to a fund to replace the license with something less... odd. Later, Jan 17 2011 Robert Clipsham <robert octarineparrot.com> writes: On 18/01/11 01:09, Brad Roberts wrote: On Mon, 17 Jan 2011, Walter Bright wrote: Robert Clipsham wrote: Speaking of which, are you able to remove the "The Software was not designed to operate after December 31, 1999" sentence at all, or does that require you to mess around contacting Symantec? Not that anyone reads it, it is kind of off-putting to see that over a decade later though, for anyone who bothers Consider it like the DNA we all still carry around for fish gills! In all seriousness, the backend license makes dmd look very strange. It threw the lawyers I consulted for a serious loop. At a casual glance it gives the impression of software that's massively out of date and out of touch with the real world. I know that updating it would likely be very painful, but is it just painful or impossible? Is it something that money could solve? I'd chip in to a fund to replace the license with something less... odd. Later, Make that a nice open source license and I'm happy to throw some money at it too :> -- Robert http://octarineparrot.com/ Jan 18 2011 Johann MacDonagh <johann.macdonagh..no spam..gmail.com> writes: On 1/16/2011 5:07 PM, Walter Bright wrote: We'll be moving dmd, phobos, druntime, and the docs to Github shortly.
The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D. I'm sure you've already seen this, but Pro Git is probably the best guide for git. http://progit.org/book/ Once you understand what a commit is, what a tree is, what a merge is, what a branch is, etc... it's actually really simple (Chapter 9 in Pro Git). Definitely a radical departure from svn, and a good one for D. Jan 18 2011 retard <re tard.com.invalid> writes: Sun, 16 Jan 2011 21:46:25 +0100, Andrej Mitrovic wrote: With CRTs I could spend a few hours in front of the PC, but after that my eyes would get really tired and I'd have to take a break. Since I switched to LCDs I've never had this problem anymore; I could spend a day staring at a screen if I wanted to. Of course, it's still best to take some time off regardless of the screen type. That's a good point. I've already forgotten how much eye strain the old monitors used to cause. Anyway.. how about that Git thing, then? :D :) Jan 16 2011 retard <re tard.com.invalid> writes: Sun, 16 Jan 2011 12:34:36 -0800, Walter Bright wrote: Andrei Alexandrescu wrote: Meanwhile, you are looking at a gamma gun shooting atcha. I always worried about that. Nobody actually found anything wrong, but still. It's like the cell phone studies. Whether they're causing brain tumors or not. Jan 16 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes: On 16/01/2011 19:38, Andrei Alexandrescu wrote: On 1/15/11 10:47 PM, Nick Sabalausky wrote: "Daniel Gibson"<metalcaedes gmail.com> wrote in message news:igtq08$2m1c$1 digitalmars.com... There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. 2. For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD). It's a legacy issue. Clearly everybody except you is using CRTs for gaming and whatnot. Therefore graphics hardware producers and game vendors are doing what it takes to adapt to a fixed resolution. Actually, not entirely true, although not because of old games. Some players of hardcore twitch FPS games (like Quake), especially professional players, still use CRTs, due to the near-zero input lag that LCDs, although having improved in that regard, are still not able to match exactly. But other than that, I really see no reason to stick with CRTs vs a good LCD, yeah. -- Bruno Medeiros - Software Engineer Jan 28 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes: On 16/01/2011 04:47, Nick Sabalausky wrote: There's two reasons it's good for games: 1. Like you indicated, to get a better framerate. Framerate is more important in most games than resolution. This reason was valid at least at some point in time; for me it actually held me back from transitioning from CRTs to LCDs for some time. But nowadays the screen resolutions have stabilized (stopped increasing, in terms of DPI), and graphics cards have improved in power enough that you can play nearly any game at the LCD's native resolution with max framerate, so no worries with this anymore (you may have to tone down the graphics settings a bit in some cases, but that is fine with me) 2.
For games that aren't really designed for multiple resolutions, particularly many 2D ones, and especially older games (which are often some of the best, but they look like shit on an LCD). Well, if your LCD supports it, you have the option of not expanding the screen if the output resolution is not the native one. How good or bad that would be depends on the game, I guess. I actually did this some years ago on certain (recent) games for some time, using only 1024x768 of the 1280x1024 native, to have a better framerate. It's not a problem for me for old games, since most of the ones that I occasionally play are played in a console emulator. DOS games unfortunately were very hard to play correctly in XP in the first place (especially with SoundBlaster), so it's not a concern for me. PS: here's a nice thread for anyone looking to purchase a new LCD: It explains a lot of things about LCD technology, and ranks several LCDs according to intended usage (office work, hardcore gaming, etc.). -- Bruno Medeiros - Software Engineer Jan 28 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Nick Sabalausky wrote: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com... On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues. Actually nearly all LCDs below the 600$-800$ price point (TN panels) have quite inferior display of colors compared to el cheapo CRTs, at any resolution. Jan 16 2011 retard <re tard.com.invalid> writes: Sun, 16 Jan 2011 11:56:34 +0100, Lutger Blijdestijn wrote: Nick Sabalausky wrote: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com... On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues. Actually nearly all LCDs below the 600$-800$ price point (TN panels) have quite inferior display of colors compared to el cheapo CRTs, at any resolution. There are also occasional special offers on IPS flat panels. The TN panels have also improved. I bought a cheap 21.5" TN panel as my second monitor last year.
The viewing angles are really wide, basically about 180 degrees horizontally, a tiny bit less vertically. I couldn't see any effects of dithering noise either. It has a DVI input and a power consumption of about 30 Watts max (I run it in eco mode). Now that both the framerate and viewing angle problems have been more or less solved for TN panels (except in pivot mode), the only remaining problem is the color reproduction. But it only matters when working with photographs. Jan 16 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/15/11 9:11 PM, Nick Sabalausky wrote: "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message news:igt2pl$2u6e$1 digitalmars.com... On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), My last CRT was a 19" from Nokia, 1600x1200, top of the line. Got it for free under the condition that I pick it up myself from a porch, which is as far as its previous owner could move it. I was seriously warned to come with a friend to take it. It weighed 86 lbs. That all worked for me: I was a poor student and happened to have a huge desk at home. I didn't think twice about buying a different monitor when I moved across the country... I wonder how much your 21" CRT weighs. or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues. I think your eyes are more important than your ability to fiddle with resolution. Besides, this whole changing-the-resolution thing is a consequence of using crappy software. What you want is to set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. Heck, this is unbelievable... I spend time on the relative merits of flat panels vs. CRTs. I'm outta here. Andrei Jan 16 2011 so <so so.do> writes: Besides, this whole changing-the-resolution thing is a consequence of using crappy software. What you want is to set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. This is just... wrong. Jan 16 2011 "Nick Sabalausky" <a a.a> writes: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igvc0k$c3o$1 digitalmars.com... On 1/15/11 9:11 PM, Nick Sabalausky wrote: Heh :) Well, I can spend no money and stick with my current 21" CRT that already suits my needs (that I only paid $25 for in the first place), My last CRT was a 19" from Nokia, 1600x1200, top of the line. Got it for free under the condition that I pick it up myself from a porch, which is as far as its previous owner could move it. I was seriously warned to come with a friend to take it. It weighed 86 lbs. That all worked for me: I was a poor student and happened to have a huge desk at home. I didn't think twice about buying a different monitor when I moved across the country... I wonder how much your 21" CRT weighs. No clue.
It's my desktop system, so I haven't had a reason to pick up the monitor in years. And the desk seems to handle it just fine. or I can spend a hundred or so dollars to lose the ability to have a decent looking picture at more than one resolution and then say "Gee golly whiz! That sure is a really flat panel!!". Whoop-dee-doo. And popularity and trendiness are just non-issues. I think your eyes are more important than your ability to fiddle with resolution. Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it. Besides, this whole changing-the-resolution thing is a consequence of using crappy software. What you want is to set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.) Heck, this is unbelievable... I spend time on the relative merits of flat panels vs. CRTs. I'm outta here. You're really taking this hard, aren't you? Jan 16 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/16/11 2:22 PM, Nick Sabalausky wrote: "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message news:igvc0k$c3o$1 digitalmars.com... I think your eyes are more important than your ability to fiddle with resolution. Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it. Recent research on the dangers of CRTs to the eyes is difficult to find, for the same reason recent research on the dangers of steam locomotives is. Still, look at what Google thinks when you type "CRT monitor e". Besides, this whole changing-the-resolution thing is a consequence of using crappy software. What you want is to set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.) I'm happy with the way Ubuntu and OSX handle it. Heck, this is unbelievable... I spend time on the relative merits of flat panels vs. CRTs. I'm outta here. You're really taking this hard, aren't you? Apparently I got drawn back into the discussion :o). I'm not as intense about this as one might think, but I do find it surprising that this discussion could possibly occur at any point since about 2005. Andrei Jan 16 2011 "Nick Sabalausky" <a a.a> writes: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igvlf8$v20$1 digitalmars.com... Apparently I got drawn back into the discussion :o). I'm not as intense about this as one might think, but I do find it surprising that this discussion could possibly occur at any point since about 2005.
FWIW, when computer monitors regularly use the pixel density that the newer iPhones currently have, then I'd imagine that would easily compensate for scaling artifacts on non-native resolutions enough to get me to find and get one with a small enough delay (assuming I had the $ and needed a new monitor). Jan 16 2011 Walter Bright <newshound2 digitalmars.com> writes: Nick Sabalausky wrote: FWIW, when computer monitors regularly use the pixel density that the newer iPhones currently have, then I'd imagine that would easily compensate for scaling artifacts on non-native resolutions enough to get me to find and get one with a small enough delay (assuming I had the $ and needed a new monitor). I bought the iPod with the retina display. That gizmo has done the impossible - converted me into an Apple fanboi. I absolutely love that display. The weird thing is: set it next to an older iPod with the lower-res display. They look the same. But I find I can read the retina display without reading glasses, and it's much more fatiguing to do that with the older one. Even though they look the same! You can really see the difference if you look at both using a magnifying glass. I can clearly see the screen door even on my super-dee-duper 1900x1200 monitor, but not at all on the iPod. I've held off on buying an iPad because I want one with a retina display, too (and the camera for video calls). Jan 16 2011 "Nick Sabalausky" <a a.a> writes: "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:igvlf8$v20$1 digitalmars.com... On 1/16/11 2:22 PM, Nick Sabalausky wrote: "Andrei Alexandrescu"<SeeWebsiteForEmail erdani.org> wrote in message news:igvc0k$c3o$1 digitalmars.com... I think your eyes are more important than your ability to fiddle with resolution. Everyone always seems to be very vague on that issue. Given real, reliable, non-speculative evidence that CRTs are significantly (and not just negligibly) worse on the eyes, I could certainly be persuaded to replace my CRT when I can actually afford to. Now I'm certainly not saying that such evidence isn't out there, but FWIW, I have yet to come across it. Recent research on the dangers of CRTs to the eyes is difficult to find, for the same reason recent research on the dangers of steam locomotives is. Still, look at what Google thinks when you type "CRT monitor e". It's not as clear-cut as you may think. One of the first results for "CRT monitor eye": http://www.tomshardware.com/forum/52709-3-best-eyes Keep in mind too, that the vast majority of the reports of CRTs being significantly worse either have no backing references or are so anecdotal and vague that it's impossible to distinguish them from the placebo effect. And there are other variables that rarely get mentioned, like whether they happen to be looking at a CRT with a bad refresh rate or brightness/contrast set too high.
My CRTs would gradually get fuzzier over time. It was so slow you didn't notice until you set them next to a new one. Jan 16 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: I found this: http://stackoverflow.com/questions/315911/git-for-beginners-the-definitive-practical-guide A bunch of links to SO questions/answers. Jan 16 2011 retard <re tard.com.invalid> writes: Sun, 16 Jan 2011 15:22:13 -0500, Nick Sabalausky wrote: Dude, you need to upgrade!!! The CRTs have a limited lifetime. It's simply a fact that you need to switch to flat panels or something better. They probably won't even manufacture CRTs anymore. It's becoming more and more impossible to purchase *unused* CRTs anywhere. At least at a reasonable price. For example, used 17" TFTs cost less than $40. I found pages like this http://shopper.cnet.com/4566-3175_9-0.html Even the prices aren't very competitive. I only remember that all refresh rates below 85 Hz caused me headaches and eye fatigue. You can't use the max resolution at 60 Hz for very long. Why should *I* spend the money to replace something that already works fine for me? You might get more things done by using a bigger screen. Maybe get some money to buy better equipment and stop complaining. Besides, this whole changing-the-resolution thing is a consequence of using crappy software. What you want is to set the resolution to the maximum and do the rest in software. And guess what - at their maximum, CRT monitors suck compared to flat panels. Agreed, but show me an OS that actually *does* handle that reasonably well. XP doesn't. Win7 doesn't. Ubuntu 9.04 and Kubuntu 10.10 don't. (And I'm definitely not going back to OSX, I've had my fill of that.) My monitors have had about the same pixel density over the years: EGA (640x400) or 720x348 (Hercules) / 12", 800x600 / 14", 1024x768 / 15-17", 1280x1024 / 19", 1280x1024 / 17" TFT, 1440x900 / 19", 1920x1080 / 21.5", 2560x1600 / 30". Thus, there's no need to enlarge all graphical widgets or text. My vision is still OK. What changes is the amount of simultaneously visible area for applications. You're just wasting the expensive screen real estate by enlarging everything. You're supposed to run more simultaneous tasks on a larger screen. I've actually compared the rated power consumption between CRTs and LCDs of similar size and was actually surprised to find that there was little, if any, real difference at all on the sets I compared. I'm pretty sure I did point out the limitations of my observation: "...on all the sets I compared". And it's pretty obvious I wasn't undertaking a proper extensive study. There's no need for sarcasm. Your comparison was pointless. You can come up with all kinds of arbitrary comparisons. The TFT panel power consumption probably varies between 20 and 300 Watts. Do you even know how much power your CRT uses? CRTs used as computer monitors and those used as televisions have different characteristics. CRT TVs have better brightness and contrast, but lower resolution and sharpness than CRT computer monitors. Computer monitors tend to need more power, maybe even twice as much. Also, larger monitors of the same brand tend to use more power. When a CRT monitor gets older, you need more power to illuminate the phosphor, as the amount of phosphor in the small holes of the grille/mask decreases over time. This isn't the case with TFTs. The backlight brightness and the panel's color handling dictate power consumption. A 15" TFT might need as much power as a 22" TFT using the same panel technology.
TFT TVs use more power as they typically provide higher brightness. Same thing if you buy those high-quality panels for professional graphics work. The TFT power consumption has also drastically dropped because of AMOLED panels, LED backlights and better dynamic contrast logic. The fluorescent backlights lose some of their brightness (maybe about 30%) before dying, unlike a CRT, which totally goes dark. The LED backlights won't suffer from this (at least observably). My observation is that e.g. in computer classes (30+ computers per room) the air conditioning started to work much better after the upgrade to flat panels. Another upgrade turned the computers into mini-ITX thin clients. Now the room doesn't need air conditioning at all. Jan 16 2011 "Nick Sabalausky" <a a.a> writes: "retard" <re tard.com.invalid> wrote in message news:ih0b1t$g2g$3 digitalmars.com... For example, used 17" TFTs cost less than $40. Continuing to use my 21" CRT costs me nothing. Even the prices aren't very competitive. I only remember that all refresh rates below 85 Hz caused me headaches and eye fatigue. You can't use the max resolution at 60 Hz for very long. I run mine no lower than 85 Hz. It's about 100 Hz at the moment. And I never need to run it at the max rez for long. It's just nice to be able to bump it up now and then when I want to. Then it goes back down. And yet people feel the need to bitch about me liking that ability. Why should *I* spend the money to replace something that already works fine for me? You might get more things done by using a bigger screen. Maybe get some money to buy better equipment and stop complaining. You've got to be kidding me...*other* people start giving *me* crap about what *I* choose to use, and you try to tell me *I'm* the one that needs to stop complaining? I normally try very much to avoid direct personal comments and only attack the arguments, not the arguer, but seriously, what the hell is wrong with your head that you could even think of such an enormously idiotic thing to say? Meh, I'm not going to bother with the rest... Jan 16 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Sunday 16 January 2011 23:17:22 Nick Sabalausky wrote: "retard" <re tard.com.invalid> wrote in message news:ih0b1t$g2g$3 digitalmars.com... For example, used 17" TFTs cost less than $40. Continuing to use my 21" CRT costs me nothing. Even the prices aren't very competitive. I only remember that all refresh rates below 85 Hz caused me headaches and eye fatigue. You can't use the max resolution at 60 Hz for very long. I run mine no lower than 85 Hz. It's about 100 Hz at the moment. I've heard that the eye fatigue at 60 Hz is because it matches the electricity for the light bulbs in the room, so the flickering of the light bulbs and the screen match. Keeping it above 60 Hz avoids the problem. 100 Hz is obviously well above that. And I never need to run it at the max rez for long. It's just nice to be able to bump it up now and then when I want to. Then it goes back down. And yet people feel the need to bitch about me liking that ability. You can use whatever you want for all I care. It's your computer, your money, and your time. I just don't understand what the point of messing with your resolution is. I've always just set it at the highest possible level that I can. I've currently got 1920 x 1200 on a 24" monitor, but it wouldn't hurt my feelings any to get a higher resolution.
I probably won't, simply because I'm more interested in getting a second monitor than a higher resolution, and I don't want to fork out for two monitors to get a dual monitor setup (since I want both monitors to be the same size) when I already have a perfectly good monitor, but I'd still like a higher resolution. So, the fact that you have and want a CRT and actually want the ability to adjust the resolution baffles me, but I see no reason to try and correct you or complain about it. - Jonathan M Davis Jan 16 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Saturday 15 January 2011 13:13:41 Andrei Alexandrescu wrote: On 1/15/11 2:23 AM, Nick Sabalausky wrote: I still use CRTs (one big reason being that I hate the idea of only being able to use one resolution) I'd read some post of Nick and think "hmm, now that's a guy who follows only his own beat" but this has to take the cake. From here on, I wouldn't be surprised if you found good reasons to use whale fat powered candles instead of lightbulbs. But don't you just _hate_ the fact that lightbulbs don't smell? How can you stand that? ;) Yes. That does take the cake. And I want it back, since cake sounds good right now. LOL. This thread has seriously been derailed. I wonder if I should start a new one on the source control issue. I'd _love_ to be able to use git with Phobos and druntime rather than svn, and while I've never used mercurial and have no clue how it compares to git, it would have to be an improvement over svn. Unfortunately, that topic seems to have not really ultimately gone anywhere in this thread. - Jonathan M Davis Jan 15 2011 "Jérôme M. Berger" <jeberger free.fr> writes: Nick Sabalausky wrote: "retard" <re tard.com.invalid> wrote in message Hard drives: these always fail, sooner or later. There's nothing you can do except RAID and backups And SMART monitors: I've had a total of two HDDs fail, and in both cases I really lucked out. The first one was in my Mac, but it was after I was already getting completely fed up with OSX and Apple, so I didn't really care much - I was mostly back on Windows again by that point. The second failure just happened to be the least important of the three HDDs in my system. I was still pretty upset about it though, so it was a big wake-up call: I *will not* have a primary system anymore that doesn't have a SMART monitoring program, with temperature readouts, always running. And yes, it can't always predict a failure, but sometimes it can, so IMO there's no good reason not to have it. That's actually one of the things I don't like about Linux, nothing like that seems to exist for Linux. Sure, there's a cmd line program you can poll, but that doesn't remotely cut it. Simple curiosity: what do you use for SMART monitoring on Windows?
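The command-line program Nick alludes to is smartctl, from the smartmontools package that also provides the smartd daemon mentioned in the reply just below. As a rough, hypothetical sketch of the hand-rolled polling approach being dismissed here - not code from the thread, and the device path and polling interval are illustrative assumptions - a minimal D wrapper might look like this:

// Hypothetical sketch: poll a disk's SMART health via smartctl.
// Assumes smartmontools is installed and /dev/sda is the disk to watch.
import std.process : execute;
import std.stdio : writeln;
import core.thread : Thread;
import core.time : minutes;

void main()
{
    enum device = "/dev/sda"; // illustrative choice of disk
    for (;;)
    {
        // "smartctl -H" reports the drive's overall health assessment;
        // a non-zero exit status signals trouble (or a missing tool/device).
        auto r = execute(["smartctl", "-H", device]);
        writeln(r.status == 0
            ? device ~ ": SMART health check passed"
            : "WARNING: smartctl flagged " ~ device ~ " (or could not run)");
        Thread.sleep(30.minutes); // poll every half hour
    }
}

On Linux the more robust route is the one Jérôme describes next: let smartd do the scheduling and send mail on error, rather than hand-rolling a poller.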
I use smartd (same as Linux) but where I am reasonably confident that on Linux it will email me if it detects an error condition, I am not as sure of being notified on Windows (where email is not an option because it is at work and Lotus will not accept email from sources other than those explicitly allowed by the IT admins). Hard Disk Sentinel. I'm not married to it or anything, but it seems to be pretty good. Jan 16 2011 "Jérôme M. Berger" <jeberger free.fr> writes: Nick Sabalausky wrote: ""Jérôme M. Berger"" <jeberger free.fr> wrote in message news:iguask$1dur$1 digitalmars.com... Simple curiosity: what do you use for SMART monitoring on Windows? I use smartd (same as Linux) but where I am reasonably confident that on Linux it will email me if it detects an error condition, I am not as sure of being notified on Windows (where email is not an option because it is at work and Lotus will not accept email from sources other than those explicitly allowed by the IT admins). Hard Disk Sentinel. I'm not married to it or anything, but it seems to be pretty good. Thanks, I'll have a look. Jerome -- mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr Jan 17 2011 "Nick Sabalausky" <a a.a> writes: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.571.1294806486.4748.digitalmars-d puremagic.com... Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. But there's no telling if that would work either. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine. Jan 12 2011 "Nick Sabalausky" <a a.a> writes: "Nick Sabalausky" <a a.a> wrote in message news:igkv8v$2gq$1 digitalmars.com... "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.571.1294806486.4748.digitalmars-d puremagic.com... Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. But there's no telling if that would work either. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine. They're cheap because they have lower clock speeds, fewer features, and less memory. Jan 12 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/12/11, Nick Sabalausky <a a.a> wrote: Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine. I'm saying that if you buy a cheap video card *today* you might not get what you expect. And I'm not talking out of my ass, I've had plenty of experience with faulty hardware and device drivers. The 'quality' depends more on who makes the product than what price tag it has, but you have to look these things up and not buy things on first sight because they're cheap. Jan 12 2011 retard <re tard.com.invalid> writes: Wed, 12 Jan 2011 14:22:59 -0500, Nick Sabalausky wrote: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.571.1294806486.4748.digitalmars-d puremagic.com...
Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. But there's no telling if that would work either. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine. There's no reason why they would break. A few months ago I was reconfiguring an old server at work which still used two 16-bit 10 megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). It's a modern kernel which is still maintained and had up-to-date drivers for the 20-year-old device! Those devices have no moving parts and are stored inside EMP & UPS protected strong server cases. How the heck could they break? Same thing, can't imagine how a video card could break. The old ones didn't even have massive cooling solutions, the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years. Jan 12 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Wednesday 12 January 2011 13:11:13 retard wrote: Wed, 12 Jan 2011 14:22:59 -0500, Nick Sabalausky wrote: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.571.1294806486.4748.digitalmars-d puremagic.com... Notice the smiley face -> :D Yeah I didn't check the price, it's only 30$. But there's no telling if that would work either. Also, dirt cheap video cards are almost certainly going to cause problems. Even if the drivers worked perfectly, a year down the road things will start breaking down. Cheap hardware is cheap for a reason. Ridiculous. All of the video cards I'm using are ultra-cheap ones that are about 10 years old and they all work fine. There's no reason why they would break. A few months ago I was reconfiguring an old server at work which still used two 16-bit 10 megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). It's a modern kernel which is still maintained and had up-to-date drivers for the 20-year-old device! Those devices have no moving parts and are stored inside EMP & UPS protected strong server cases. How the heck could they break? Same thing, can't imagine how a video card could break.
The old ones didn't even have massive cooling solutions, the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years. It depends on a number of factors, including the quality of the card and the conditions that it's being used in. I've had video cards die before. I _think_ that it was due to overheating, but I really don't know. It doesn't really matter. The older the part, the more likely it is to break. The cheaper the part, the more likely it is to break. Sure, the lack of moving parts makes it less likely for a video card to die, but it definitely happens. Computer parts don't last forever, and the lower their quality, the less likely it is that they'll last. That doesn't mean that a cheap video card won't last for years and function just fine, but there is a risk that a cheap card will be too cheap to last. - Jonathan M Davis Jan 12 2011 retard <re tard.com.invalid> writes: Wed, 12 Jan 2011 13:22:28 -0800, Jonathan M Davis wrote: On Wednesday 12 January 2011 13:11:13 retard wrote: Same thing, can't imagine how a video card could break. The old ones didn't even have massive cooling solutions, the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years. It depends on a number of factors, including the quality of the card and the conditions that it's being used in. Of course. I've had video cards die before. I _think_ that it was due to overheating, but I really don't know. It doesn't really matter. Modern GPU and CPU parts are of course getting hotter and hotter. They're getting so hot it's a miracle that components such as the capacitors near the cores can handle it. You need better cooling, which means even more parts that can break. The older the part, the more likely it is to break. Not true. http://en.wikipedia.org/wiki/Bathtub_curve The cheaper the part, the more likely it is to break. That might be true if the part is a power supply or a monitor. However, the latest and greatest video cards and CPUs are sold at an extremely high price mainly for hardcore gamers (and 3D modelers -- Quadro & FireGL). This is sometimes purely an intellectual property issue, nothing to do with physical parts. For example, I've earned several hundred euros by installing soft-mods, that is, upgraded firmware / drivers. Ever heard of Radeon 9500 -> 9700, 9800SE -> 9800, and lately 6950 -> 6970 mods? I've also modded one PC NVIDIA card to work on Macs (sold at a higher price) and done one Geforce -> Quadro mod. You don't touch the parts at all, just flash the ROM. It would be a miracle if that improved the physical quality of the parts. It does raise the price, though. Another observation: the target audience of the low-end NVIDIA cards is usually HTPC and office users. These computers have small cases and require low-profile cards. The cards actually have *better* multimedia features (PureVideo) than the high-end cards for gamers. These cards are built by the same companies as the larger versions (Asus, MSI, Gigabyte, and so on). Could it just be that by giving the buyer fewer physical parts and less intellectual property in the form of GPU firmware, they can sell at a lower price? There are also these cards with the letters "OC" in their name. The manufacturer has deliberately overclocked the cards beyond their specs. That's actually hurting the reliability, but the price is even higher. Jan 12 2011 Jeff Nowakowski <jeff dilacero.org> writes: On 01/12/2011 04:11 PM, retard wrote: Same thing, can't imagine how a video card could break.
Sure, the lack of moving parts makes it less likely for a video card to die, but it definitely happens. Computer parts don't last forever, and the lower their quality, the less likely it is that they'll last. By no means does that mean that a cheap video card isn't necessarily going to last for years and function just fine, but it is a risk that a cheap card will be too cheap to last. "Cheap" in the sense of "less money" isn't the problem. Actually, HW that cost more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what level of HW are sold at which clock-levels, I.E. manufacturing control procedures. So an expensive low-end for a hundred bucks might easily outlast a cheap high-end alternative for 4 times the money. Buy quality, not expensive. There is a difference. Jan 12 2011 retard <re tard.com.invalid> writes: Wed, 12 Jan 2011 22:46:46 +0100, Ulrik Mikaelsson wrote: Wow. The thread that went "Moving to D"->"Problems with DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a personally observed record of OT threads, I think. Anyways, I've refrained from throwing fuel on the thread as long as I can, I'll bite: It depends on a number of factors, including the quality of the card and the conditions that it's being used in. I've had video cards die before. I _think_ that it was due to overheating, but I really don't know. It doesn't really matter. The older the part, the more likely it is to break. The cheaper the part, the more likely it is to break. Sure, the lack of moving parts makes it less likely for a video card to die, but it definitely happens. Computer parts don't last forever, and the lower their quality, the less likely it is that they'll last. By no means does that mean that a cheap video card isn't necessarily going to last for years and function just fine, but it is a risk that a cheap card will be too cheap to last. "Cheap" in the sense of "less money" isn't the problem. Actually, HW that cost more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what level of HW are sold at which clock-levels, I.E. manufacturing control procedures. So an expensive low-end for a hundred bucks might easily outlast a cheap high-end alternative for 4 times the money. Buy quality, not expensive. There is a difference. Nicely written, I fully agree with you. Jan 12 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/12/11 2:30 PM, retard wrote: Wed, 12 Jan 2011 22:46:46 +0100, Ulrik Mikaelsson wrote: Wow. The thread that went "Moving to D"->"Problems with DMD"->"DVCS"->"WHICH DVCS"->"Linux Problems"->"Driver Problems/Manufacturer preferences"->"Cheap VS. Expensive". It's a personally observed record of OT threads, I think. Anyways, I've refrained from throwing fuel on the thread as long as I can, I'll bite: It depends on a number of factors, including the quality of the card and the conditions that it's being used in. I've had video cards die before. I _think_ that it was due to overheating, but I really don't know. It doesn't really matter. The older the part, the more likely it is to break. The cheaper the part, the more likely it is to break. 
Sure, the lack of moving parts makes it less likely for a video card to die, but it definitely happens. Computer parts don't last forever, and the lower their quality, the less likely it is that they'll last. That doesn't mean that a cheap video card won't last for years and function just fine, but there is a risk that a cheap card will be too cheap to last. "Cheap" in the sense of "less money" isn't the problem. Actually, HW that costs more is often high-end HW which creates more heat, which _might_ actually shorten the lifetime. On the other hand, low-end HW is often less heat-producing, which _might_ make it last longer. The real difference lies in what levels of HW are sold at which clock levels, i.e., manufacturing control procedures. So an expensive low-end card for a hundred bucks might easily outlast a cheap high-end alternative for 4 times the money. Buy quality, not expensive. There is a difference. Nicely written, I fully agree with you. Same here. It's not well understood that heating/cooling cycles, with the corresponding expansion and contraction cycles, are the main reason electronics fail. At an extreme, the green-minded person who turns all CFLs and all computers off at all opportunities ends up producing more expense and more waste than the lazier person who leaves stuff on for longer periods of time. Andrei Jan 12 2011 Walter Bright <newshound2 digitalmars.com> writes: retard wrote: There's no reason why they would break. A few months ago I was reconfiguring an old server at work which still used two 16-bit 10 megabit ISA network cards. I fetched a kernel upgrade (2.6.27.something). It's a modern kernel which is still maintained and had up-to-date drivers for the 20-year-old device! Those devices have no moving parts and are stored inside EMP & UPS protected strong server cases. How the heck could they break? Same thing, can't imagine how a video card could break. The old ones didn't even have massive cooling solutions, the chips didn't even need a heatsink. The only problem is driver support, but on Linux it mainly gets better over the years. I paid my way through college hand-making electronics boards for professors and engineers. All semiconductors have a lifetime that is measured by the area under the curve of their temperature over time. The doping in the semiconductor gradually diffuses through the semiconductor; the rate of diffusion increases as the temperature rises. Once the differently doped parts "collide", the semiconductor fails. Jan 12 2011 Eric Poggel <dnewsgroup2 yage3d.net> writes: On 1/12/2011 6:41 PM, Walter Bright wrote: All semiconductors have a lifetime that is measured by the area under the curve of their temperature over time. Oddly enough, milk has the same behavior. Jan 28 2011 Daniel Gibson <metalcaedes gmail.com> writes: On 12.01.2011 04:02, Jean Crystof wrote: Walter Bright Wrote: My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into it. It's hardly weird or wacky or old (it was new at the time I bought it to install Ubuntu). ASUS M2A-VM has the 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported."
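A side note on Walter's dopant-diffusion point above: the standard way to make "area under the temperature curve" precise is an Arrhenius-type model, in which the diffusion rate grows exponentially with absolute temperature. This is a textbook relation, not a formula given anywhere in the thread:

$$D = D_0 \, e^{-E_a/(kT)}$$

where $D$ is the diffusion coefficient, $D_0$ a material constant, $E_a$ the activation energy, $k$ Boltzmann's constant, and $T$ the absolute temperature. The accumulated dopant migration then behaves like $\int D(T(t))\,dt$, which is exactly the "area under the curve" intuition, except that hotter intervals are weighted exponentially more heavily.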
I guess a recent version of the free drivers (as delivered with recent Ubuntu releases) is still much better than the one in Walter's >2 years old Ubuntu. Sure, game performance may not be great, but I guess normal working (even in 1920x1200) and watching youtube videos works. Fast forward to this day: http://www.phoronix.com/scan.php?page=article&item=amd_driver_q111&num=2 The benchmark page says: the only available driver for your graphics gives only about 10-20% of the real performance. Why? ATI sucks on Linux. Don't buy ATI. Buy Nvidia instead: No it doesn't. The X1250 uses the same driver as the X1950, which is much more mature and also faster than the free driver for the Radeon HD *** cards (for which a proprietary Catalyst driver is still provided). http://geizhals.at/a466974.html This is the 3rd latest Nvidia GPU generation. How long does support last? Ubuntu 10.10 still supports all GeForce 2+ cards, which are 10 years old. I foretell Ubuntu 19.04 will be the last one supporting this. Use Nvidia and your problems are gone. I agree that a recent nvidia card may improve things even further. Cheers, - Daniel Jan 12 2011 retard <re tard.com.invalid> writes: Wed, 12 Jan 2011 19:11:22 +0100, Daniel Gibson wrote: On 12.01.2011 04:02, Jean Crystof wrote: Walter Bright Wrote: My mobo is an ASUS M2A-VM. No graphics cards, or any other cards plugged into it. It's hardly weird or wacky or old (it was new at the time I bought it to install Ubuntu). The ASUS M2A-VM has the 690G chipset. Wikipedia says: http://en.wikipedia.org/wiki/AMD_690_chipset_series#690G "AMD recently dropped support for Windows and Linux drivers made for Radeon X1250 graphics integrated in the 690G chipset, stating that users should use the open-source graphics drivers instead. The latest available AMD Linux driver for the 690G chipset is fglrx version 9.3, so all newer Linux distributions using this chipset are unsupported." I guess a recent version of the free drivers (as delivered with recent Ubuntu releases) is still much better than the one in Walter's >2 years old Ubuntu. Most likely. After all, they're fixing more bugs than creating new ones. :-) My other guess is, while the open source drivers are far from perfect for hardcore gaming, the basic functionality like setting up a video mode is getting better. Remember the days you needed to type in all internal and external clock frequencies and packed pixel bit counts in xorg.conf?! Sure, game performance may not be great, but I guess normal working (even in 1920x1200) and watching youtube videos works. Embedded videos on web pages used to require huge amounts of CPU power when you were upscaling them in fullscreen mode. The reason is that Flash only recently started supporting hardware-accelerated video, on ***32-bit*** systems equipped with a ***NVIDIA*** card. The same VDPAU libraries are used by the native video players. I tried to accelerate video playback with my Radeon HD 5770, but it failed badly. Believe it or not, my 3 GHz 4-core Core i7 system with 24 GB of RAM and the fast Radeon HD 5770 was too slow to play 1080p (1920x1080) videos using the open source drivers. Without hardware acceleration you need a modern high-end dual-core system or faster to run the video, assuming the drivers aren't broken. If you only want to watch youtube videos in windowed mode, you still need a 2+ GHz single-core. But... Youtube has switched to HTML5 videos recently. This should take the requirements down a notch. Still, I wouldn't trust integrated graphics that much. They've always been crap.
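Walter's point upthread about semiconductor lifetime being the area under the temperature-over-time curve can be made concrete with a toy model. A minimal sketch in D, assuming a simple Arrhenius-style rule of thumb where the dopant diffusion rate doubles for every 10 degrees C; every constant here is illustrative, not taken from any datasheet:

    import std.math;
    import std.stdio;

    // Toy wear model: diffusion rate doubles every 10 degrees C above an
    // arbitrary 50 C reference. Accumulated "wear" is the area under the
    // rate-over-time curve, here for a constant operating temperature.
    double wear(double tempCelsius, double hours)
    {
        return hours * pow(2.0, (tempCelsius - 50.0) / 10.0);
    }

    void main()
    {
        immutable fiveYears = 5.0 * 8760; // hours
        writefln("cool low-end part (55 C): %.0f wear units", wear(55.0, fiveYears));
        writefln("hot high-end part (85 C): %.0f wear units", wear(85.0, fiveYears));
    }

On this model the part running at 85 C accumulates 2^3 = 8 times the wear of the one at 55 C over the same five years, which is the intuition behind both the dopant-diffusion point and the cheap-vs-expensive argument above.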
Jan 12 2011 Daniel Gibson <metalcaedes gmail.com> writes: On 06.01.2011 20:46, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. Cheers, - Daniel Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Daniel Gibson" <metalcaedes gmail.com> wrote in message news:ig57ar$1gn9$1 digitalmars.com... On 06.01.2011 20:46, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd Jan 06 2011 Daniel Gibson <metalcaedes gmail.com> writes: On 06.01.2011 23:26, Nick Sabalausky wrote: "Daniel Gibson" <metalcaedes gmail.com> wrote in message news:ig57ar$1gn9$1 digitalmars.com...
On 06.01.2011 20:46, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd http://www.dsource.org/projects/ddmd/changeset?new=rt%40185%3A13cf8da225ce&old=rt%40183%3A190ba98276b3 "Trac detected an internal error:" looks like dsource uses an old/broken version of the mercurial plugin. But normally it *should* work, I think. Jan 06 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Daniel Gibson wrote: On 06.01.2011 23:26, Nick Sabalausky wrote: "Daniel Gibson" <metalcaedes gmail.com> wrote in message news:ig57ar$1gn9$1 digitalmars.com... On 06.01.2011 20:46, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. It's not SVN but trac doing this. And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life.
DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd http://www.dsource.org/projects/ddmd/changeset?new=rt%40185%3A13cf8da225ce&old=rt%40183%3A190ba98276b3 "Trac detected an internal error:" looks like dsource uses an old/broken version of the mercurial plugin. But normally it *should* work, I think. This works: http://www.dsource.org/projects/ddmd/changeset/183:190ba98276b3 Jan 06 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/6/11 4:26 PM, Nick Sabalausky wrote: DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd The ready availability of Mercurial on dsource.org plus Don's inclination to use Mercurial just tipped the scale for me. We should do all we can to make Don's and other developers' lives easier, and being able to work on multiple fixes at a time is huge. We should create a new Mercurial repository. I suggest we call it digitalmars because the current "phobos" svn repo contains a lot of stuff that's not phobos-related. Andrei Jan 06 2011 Brad Roberts <braddr slice-2.puremagic.com> writes: On Thu, 6 Jan 2011, Andrei Alexandrescu wrote: On 1/6/11 4:26 PM, Nick Sabalausky wrote: DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd The ready availability of Mercurial on dsource.org plus Don's inclination to use Mercurial just tipped the scale for me. We should do all we can to make Don's and other developers' lives easier, and being able to work on multiple fixes at a time is huge. We should create a new Mercurial repository. I suggest we call it digitalmars because the current "phobos" svn repo contains a lot of stuff that's not phobos-related. Andrei Personally, I'd prefer git over mercurial, which dsource also supports. But, really, I'd prefer github over dsource (sorry, BradA) for stability and a generally much more usable site. My general problem with the switch to a different SCM of any sort: 1) the history of the current source is a mess. a) lack of tags for releases b) logical merges have all been done as individual commits 2) Walter's workflow, meaning that he won't use the SCM merge facilities. He manually merges everything. None of this is really a problem, it just becomes a lot more visible when using a system that encourages keeping a very clean history and the use of branches and merging. My 2 cents, Brad Jan 06 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: I've only ever used hg (mercurial), and only for some private repositories. I'll say one thing: it's pretty damn fast considering it requires Python to work. Also, Joel's tutorial that introduced me to hg was short and to the point: http://hginit.com/ Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.459.1294356168.4748.digitalmars-d puremagic.com... I've only ever used hg (mercurial), and only for some private repositories. I'll say one thing: it's pretty damn fast considering it requires Python to work. Also, Joel's tutorial that introduced me to hg was short and to the point: http://hginit.com/ I have to comment on this part: "The main way you notice this is that in Subversion, if you go into a subdirectory and commit your changes, it only commits changes in that subdirectory and all directories below it, which potentially means you've forgotten to check something in that lives in some other subdirectory which also changed. Whereas, in Mercurial, all commands always apply to the entire tree.
If your code is in c:\code, when you issue the hg commit command, you can be in c:\code or in any subdirectory and it has the same effect." Funny thing about that: After accidentally committing a subdirectory instead of the full project one too many times, I submitted a TortoiseSVN feature request for an option to always commit the full working directory, or at least an option to warn when you're not committing the full working directory. They absolutely lynched me for having such a suggestion. Jan 08 2011 Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes: Funny thing about that: After accidentally committing a subdirectory instead of the full project one too many times, I submitted a TortoiseSVN feature request for an option to always commit the full working directory, or at least an option to warn when you're not committing the full working directory. They absolutely lynched me for having such a suggestion. Of course. You're in conflict with the only hardly-functional branching support SVN knows (copy directory). I know some people who consider it a feature to always check out entire SVN repos, including all branches and all tags. Of course, they are the same people who set aside half-days to do the checkout, and consider it a day's work to actually merge something back. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Fri, 07 Jan 2011 01:03:42 +0200, Brad Roberts <braddr slice-2.puremagic.com> wrote: 2) Walter's workflow, meaning that he won't use the SCM merge facilities. He manually merges everything. Not sure about Hg, but in Git you can solve this by simply manually specifying the two parent commits. Git doesn't care how you merged the two branches. In fact, you can even do this locally by using grafts. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 06 2011 Russel Winder <russel russel.org.uk> writes: On Thu, 2011-01-06 at 15:03 -0800, Brad Roberts wrote: [ . . . ] 1) the history of the current source is a mess. a) lack of tags for releases b) logical merges have all been done as individual commits Any repository coming to DVCS from CVS or Subversion will have much worse than this :-((( In the end you have to bite the bullet and say "let's do it, and repair stuff later if we have to". 2) Walter's workflow, meaning that he won't use the SCM merge facilities. He manually merges everything. At a guess I would say that this is more an issue that CVS and Subversion have truly outdated ideas about branching and merging. Indeed, merging branches in Subversion seems still to be so difficult it makes a shift to DVCS the only way forward. None of this is really a problem, it just becomes a lot more visible when using a system that encourages keeping a very clean history and the use of branches and merging. And no rebasing! At the risk of over-egging the pudding: No organization or project I have knowledge of that made the shift from CVS or Subversion to DVCS (Mercurial, Bazaar, or Git) has ever regretted it. -- Russel.
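Vladimir's remark above about manually specifying the two parent commits can be shown at Git's plumbing level. A hedged sketch; the branch name and placeholder hashes are hypothetical, though `git commit-tree`, `git update-ref`, and the grafts file are standard Git:

    # Record a hand-made merge: commit the manually merged tree, naming
    # both branches as parents (the message is read from stdin).
    echo "merge feature-x by hand" | git commit-tree HEAD^{tree} -p HEAD -p feature-x
    # The command prints the new commit's hash; point the branch at it:
    git update-ref refs/heads/master <new-commit-hash>

    # Or, purely locally, pretend an existing commit had two parents:
    echo "<commit-sha> <parent1-sha> <parent2-sha>" >> .git/info/grafts

Git reconstructs history from the parent links alone, so it never needs to know how the merged content was produced; that is what makes an entirely manual merge workflow representable after the fact.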
Jan 07 2011 bearophile <bearophileHUGS lycos.com> writes: Andrei: The ready availability of Mercurial on dsource.org plus Don's inclination to use Mercurial just tipped the scale for me. We should do all we can to make Don's and other developers' lives easier, and being able to work on multiple fixes at a time is huge. Probably both Mercurial and Git are a small improvement over the current situation. It's also a way to improve D's image, making it look a little more open source. I hope a fat "Fork me!" button will be visible on a web page :-) Bye, bearophile Jan 06 2011 David Nadlinger <see klickverbot.at> writes: On 1/6/11 11:47 PM, Andrei Alexandrescu wrote: Mercurial on dsource.org … Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I would also like to suggest Git over Mercurial, though this is mostly personal preference – it is used more widely, it has GitHub and Gitorious (I'm having a hard time finding Bitbucket comparable, personally), it's proven to work well in settings where the main tree is managed by a single person (->Linux), it tries to not artificially restrict you as much as possible (something I imagine Walter might like), … – but again, it's probably a matter of taste, I don't want to start a flamewar here. The most important thing to me is, however, that I'd really like to see a general shift in the way D development is done towards more contributor-friendliness. I can only bow to Walter as a very capable and experienced compiler writer, but as it was discussed several times here on the list as well, in my opinion D has reached a point where it desperately needs to win new contributors to the whole ecosystem. There is a reason why other open source projects encourage you to write helpful commit messages, and yet we don't even have tags for releases (!) in the DMD repository. I didn't intend to offend anybody at all, but I'd really hate to see D2 failing to »take off« for reasons like this… David Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "David Nadlinger" <see klickverbot.at> wrote in message news:ig5n74$2vu3$1 digitalmars.com... On 1/6/11 11:47 PM, Andrei Alexandrescu wrote: Mercurial on dsource.org …
Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I would also like to suggest Git over Mercurial, though this is mostly personal preference - it is used more widely, it has GitHub and Gitorious (I'm having a hard time finding Bitbucket comparable, personally), it's proven to work well in settings where the main tree is managed by a single person (->Linux), it tries to not artificially restrict you as much as possible (something I imagine Walter might like), ... - but again, it's probably a matter of taste, I don't want to start a flamewar here. I've never used github, but I have used bitbucket and I truly, truly hate it. Horribly implemented site and an honest pain in the ass to use. Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Nick Sabalausky Wrote: I've never used github, but I have used bitbucket and I truly, truly hate it. Horribly implemented site and an honest pain in the ass to use. I've never really used bitbucket, but I don't know how it could be any worse to use than dsource. If you ignore all the features Dsource doesn't have, it feels about the same to me. Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Jesse Phillips" <jessekphillips+D gmail.com> wrote in message news:ig62kh$h71$1 digitalmars.com... Nick Sabalausky Wrote: I've never used github, but I have used bitbucket and I truly, truly hate it. Horribly implemented site and an honest pain in the ass to use. I've never really used bitbucket, but I don't know how it could be any worse to use than dsource. If you ignore all the features Dsource doesn't have, it feels about the same to me. The features in DSource generally *just work* (except when the whole server is down, of course). With BitBucket, I tried to post a bug report for xfbuild one time (and I'm pretty sure there was another project too) and the damn thing just wouldn't work. And the text-entry box was literally two lines high. Kept trying and eventually I got one post through, but it was all garbled. So I kept trying more and nothing would show up, so I gave up. Came back a day later and there were a bunch of duplicate posts. Gah. And yea, that was just the bug tracker, but it certainly didn't instill any confidence in anything else about the site. And I'm not certain, but I seem to recall some idiotic pains in the ass when trying to sign up for an account, too. With DSource, as long as the server is up, everything's always worked for me...Well...except now that I think of it, I've never been able to edit the roadmap or edit the entries in the bug-tracker's "components" field for any of the projects I admin. Although, I can live without that. Jan 07 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Oh, yeah that would be annoying.
I haven't done much with the github website, but I haven't had issues like that. About the only thing that makes github a little annoying at first is that you have to use public/private key pairs to do any pushing to a repo. But I haven't had any issues creating/using them from Linux/Windows. You can associate multiple public keys with your account, so you don't need to take your private key everywhere with you. They can also be deleted, so you could have temporary ones. Nick Sabalausky Wrote: The features in DSource generally *just work* (except when the whole server is down, of course). With BitBucket, I tried to post a bug report for xfbuild one time (and I'm pretty sure there was another project too) and the damn thing just wouldn't work. And the text-entry box was literally two lines high. Kept trying and eventually I got one post through, but it was all garbled. So I kept trying more and nothing would show up, so I gave up. Came back a day later and there were a bunch of duplicate posts. Gah. And yea, that was just the bug tracker, but it certainly didn't instill any confidence in anything else about the site. And I'm not certain, but I seem to recall some idiotic pains in the ass when trying to sign up for an account, too. With DSource, as long as the server is up, everything's always worked for me...Well...except now that I think of it, I've never been able to edit the roadmap or edit the entries in the bug-tracker's "components" field for any of the projects I admin. Although, I can live without that. Jan 07 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes: On 07/01/2011 00:34, David Nadlinger wrote: On 1/6/11 11:47 PM, Andrei Alexandrescu wrote: Mercurial on dsource.org … Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource, however I question if it is the best way to go for medium or large-sized D projects. Other hosting sites will simply offer better/more features and/or support, stability, fewer bugs, spam protection, etc. What we have here is exactly the same issue of NIH syndrome vs DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same.
-- Bruno Medeiros - Software Engineer Jan 28 2011 Daniel Gibson <metalcaedes gmail.com> writes: On 28.01.2011 14:07, Bruno Medeiros wrote: On 07/01/2011 00:34, David Nadlinger wrote: On 1/6/11 11:47 PM, Andrei Alexandrescu wrote: Mercurial on dsource.org … Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource, however I question if it is the best way to go for medium or large-sized D projects. Other hosting sites will simply offer better/more features and/or support, stability, fewer bugs, spam protection, etc. What we have here is exactly the same issue of NIH syndrome vs DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same. D has already moved to github, see D.announce :) Jan 28 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes: On 28/01/2011 13:13, Daniel Gibson wrote: On 28.01.2011 14:07, Bruno Medeiros wrote: On 07/01/2011 00:34, David Nadlinger wrote: On 1/6/11 11:47 PM, Andrei Alexandrescu wrote: Mercurial on dsource.org … Personally, I'd really like to persuade Walter, you, and whoever else actually decides this to consider hosting the main repository at an external place like GitHub or Mercurial, because DSource has been having some real troubles with stability, although it got slightly better again recently. The problem is somewhat alleviated when using a DVCS, but having the main source repositories unavailable is not quite the best form of advertisement for a language. Additionally, the UI of GitHub supports the scenario where only a few people (or Walter alone) actually have commit/push access to the main repository really well, through cheap forks which stay logically connected to the main repository and merge requests. The ability to make comments on specific (lines in) commits, also in combination with pull requests, is awesome as well. I have to agree and reiterate this point. The issue of whether it is worthwhile for D to move to a DVCS (and which one of the two) is definitely a good thing to consider, but the issue of DSource vs. other code hosting sites is also quite a relevant one. (And not just for DMD but for any project.) I definitely thank Brad for his support and work on DSource, however I question if it is the best way to go for medium or large-sized D projects.
Other hosting sites will simply offer better/more features and/or support, stability, fewer bugs, spam protection, etc. What we have here is exactly the same issue of NIH syndrome vs DRY, but applied to hosting and development infrastructure instead of the code itself. But I think the principle applies just the same. D has already moved to github, see D.announce :) I know, I know. :) (I am up-to-date on D.announce, just not on "D" and "D.bugs") I still wanted to make that point though. First, for retrospection, but also because it may still apply to a few other DSource projects (current or future ones). -- Bruno Medeiros - Software Engineer Jan 28 2011 retard <re tard.com.invalid> writes: Fri, 28 Jan 2011 15:03:24 +0000, Bruno Medeiros wrote: I know, I know. :) (I am up-to-date on D.announce, just not on "D" and "D.bugs") I still wanted to make that point though. First, for retrospection, but also because it may still apply to a few other DSource projects (current or future ones). You don't need to read every post here. Reading every bug report is just stupid... but it's not my problem. It just means that the rest of us have less competition in everyday situations (getting women, work offers, and so on) Jan 28 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes: On 28/01/2011 21:14, retard wrote: Fri, 28 Jan 2011 15:03:24 +0000, Bruno Medeiros wrote: I know, I know. :) (I am up-to-date on D.announce, just not on "D" and "D.bugs") I still wanted to make that point though. First, for retrospection, but also because it may still apply to a few other DSource projects (current or future ones). You don't need to read every post here. Reading every bug report is just stupid... but it's not my problem. It just means that the rest of us have less competition in everyday situations (getting women, work offers, and so on) I don't read every bug report, I only (try to) read the titles and see if it's something interesting, for example something that might impact the design of the language and is just not a pure implementation issue. Still, yes, I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep), but the bottleneck at the moment is writing posts, especially those that involve arguments. They are an order of magnitude more "expensive" than reading posts. -- Bruno Medeiros - Software Engineer Feb 01 2011 Jacob Carlborg <doob me.com> writes: On 2011-01-06 23:26, Nick Sabalausky wrote: "Daniel Gibson" <metalcaedes gmail.com> wrote in message news:ig57ar$1gn9$1 digitalmars.com... On 06.01.2011 20:46, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. It's not SVN but trac doing this.
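(For what it's worth, commit mails need neither Subversion nor Trac; every DVCS in this thread has hooks for the job. A minimal sketch of a Git post-commit hook, with the list address hypothetical:)

    #!/bin/sh
    # .git/hooks/post-commit -- mail the newest commit to a list
    git log -1 --stat HEAD | mail -s "checkin: $(git log -1 --format=%s)" dmd-commits@example.com

Mercurial's equivalent is the bundled notify extension, enabled in the repository's hgrc.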
And trac's mercurial plugin seems to support that as well: http://trac.edgewall.org/wiki/TracMercurial#MercurialChangesets Bitbucket also supports that kind of view, see for example: https://bitbucket.org/goshawk/gdc/changeset/44b6978e5f6c The GitPlugin should support that as well, if I interpret the feature list correctly: http://trac-hacks.org/wiki/GitPlugin Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. DDMD uses Mercurial on DSource: http://www.dsource.org/projects/ddmd I've been using Mercurial for all my projects on dsource and some other projects not on dsource. All DWT projects use Mercurial as well. -- /Jacob Carlborg Jan 08 2011 Eric Poggel <dnewsgroup2 yage3d.net> writes: On 1/6/2011 3:03 PM, Daniel Gibson wrote: Dsource seems to support both git and mercurial, but I don't know which projects use them, else I'd use them as examples to see how those trac plugins work in real life. I stumbled across this URL the other day: http://hg.dsource.org/ Seems to list mercurial projects. I couldn't find a similar one for git. Jan 06 2011 Robert Clipsham <robert octarineparrot.com> writes: On 06/01/11 19:46, Walter Bright wrote: One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 That's Trac, not SVN doing it - all other version control systems do a similar thing. where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. This is easily doable with both mercurial and git. If you use a tool like bitbucket or github (which I *highly* recommend you do; it opens up a huge community to you, and I know of several cases where projects have been discovered through them and gained contributors, etc.). One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ There are guides for doing this in both mercurial and git; you generally just run one command one time and forget about it, and any time you do git/hg merge it will then automatically use meld or any other tool you discover. -- Robert http://octarineparrot.com/ Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Robert Clipsham" <robert octarineparrot.com> wrote in message news:ig58tk$24bn$1 digitalmars.com... On 06/01/11 19:46, Walter Bright wrote: One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 That's Trac, not SVN doing it - all other version control systems do a similar thing. where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. This is easily doable with both mercurial and git. If you use a tool like bitbucket or github (which I *highly* recommend you do; it opens up a huge community to you, and I know of several cases where projects have been discovered through them and gained contributors, etc.). I would STRONGLY recommend against using any site that requires a valid non-mailinator email address just to do basic things like post a bug report. I'm not sure exactly which ones are and aren't like that, but many free project hosting sites are like that and it's an absolutely inexcusable barrier. And of course everyone knows how I'd feel about any site that required JS for anything that obviously didn't need JS. ;)
One other random thought: I'd really hate to use a system that didn't have short sequential changeset identifiers. I think Hg does have that, although I don't think all Hg interfaces actually use it, just some. Jan 06 2011 "Jérôme M. Berger" <jeberger free.fr> writes: Nick Sabalausky wrote: One other random thought: I'd really hate to use a system that didn't have short sequential changeset identifiers. I think Hg does have that, although I don't think all Hg interfaces actually use it, just some. Hg does support short identifiers (either short hashes or sequential numbers). AFAIK all commands use them (I've never used a command that did not). Jerome Jan 07 2011 Trass3r <un known.com> writes: One other random thought: I'd really hate to use a system that didn't have short sequential changeset identifiers. I think Hg does have that, although I don't think all Hg interfaces actually use it, just some. It's built into Mercurial. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Thu, 06 Jan 2011 21:46:47 +0200, Walter Bright <newshound2 digitalmars.com> wrote: A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. Walter, if you do make the move to git (or generally switch DVCSes), please make it so that the backend is not in the same repository as the frontend. Since the backend has severe redistribution restrictions, the compiler repository can't be simply forked and published. FWIW I'm quite in favor of a switch to git (even better if you choose GitHub, as was discussed in another thread). I had to go to great lengths to set up a private git mirror of the dmd repository, as dsource kept dropping my git-svnimport connections, and it took forever. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 06 2011 Russel Winder <russel russel.org.uk> writes: On Fri, 2011-01-07 at 00:31 +0200, Vladimir Panteleev wrote: [ . . . ] FWIW I'm quite in favor of a switch to git (even better if you choose GitHub, as was discussed in another thread). I had to go to great lengths to set up a private git mirror of the dmd repository, as dsource kept dropping my git-svnimport connections, and it took forever. svnimport is for one-off transformation, i.e. for moving from Subversion to Git. Using git-svn is the way to have Git as your Subversion client -- though you have to remember that it always rebases, so your repository cannot be a peer in a Git repository group, it is only a Subversion client. The same applies for Mercurial but not for Bazaar, which can treat the Subversion repository as a peer in a Bazaar branch group. There are ways of bridging in Git and Mercurial, but it gets painful. -- Russel. Jan 07 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Fri, 07 Jan 2011 12:15:37 +0200, Russel Winder <russel russel.org.uk> wrote: On Fri, 2011-01-07 at 00:31 +0200, Vladimir Panteleev wrote: [ . . . ]
FWIW I'm quite in favor of a switch to git (even better if you choose GitHub, as was discussed in another thread). I had to go to great lengths to set up a private git mirror of the dmd repository, as dsource kept dropping my git-svnimport connections, and it took forever. svnimport is for one-off transformation, i.e. for moving from Subversion to Git. Using git-svn is the way to have Git as your Subversion client -- though you have to remember that it always rebases, so your repository cannot be a peer in a Git repository group, it is only a Subversion client. The same applies for Mercurial but not for Bazaar, which can treat the Subversion repository as a peer in a Bazaar branch group. There are ways of bridging in Git and Mercurial, but it gets painful. Sorry, I actually meant git-svn, I confused the two. -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 07 2011 Russel Winder <russel russel.org.uk> writes: On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote: A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. That surprises me. Shifting from Subversion to any of Mercurial, Bazaar or Git is such a huge improvement in tooling, especially for support of feature branches. One thing I like a lot about svn is this: http://www.dsource.org/projects/dmd/changeset/291 where the web view will highlight the revision's changes. Does git or mercurial do that? The other thing I like a lot about svn is it sends out emails for each checkin. This is a feature of the renderer, not the version control system. This is not Subversion at work, this is Trac at work. As far as I am aware, the Subversion, Mercurial, Git and Bazaar backends for Trac all provide this facility. One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Why? Mercurial, Bazaar and Git all support a variety of three-way merge tools including meld, but the whole point of branching and merging is that you don't do it manually -- except in Subversion, where merging branches remains a problem. With Mercurial, Bazaar and Git, if you accept a changeset from a branch you just merge it, e.g. git merge some-feature-branch job done. If you want to amend the changeset before committing to HEAD then create a feature branch, merge the incoming changeset to the feature branch, work on it till satisfied, merge to HEAD. The only time I use meld these days is to process merge conflicts, not to handle merging per se. -- Russel. Jan 07 2011 Walter Bright <newshound2 digitalmars.com> writes: Russel Winder wrote: One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Why? Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go. Mercurial, Bazaar and Git all support a variety of three-way merge tools including meld, but the whole point of branching and merging is that you don't do it manually -- except in Subversion, where merging branches remains a problem.
But I want to do it manually. With Mercurial, Bazaar and Git, if you accept a changeset from a branch you just merge it, e.g. git merge some-feature-branch job done. If you want to amend the changeset before committing to HEAD then create a feature branch, merge the incoming changeset to the feature branch, work on it till satisfied, merge to HEAD. The only time I use meld these days is to process merge conflicts, not to handle merging per se. I've always been highly suspicious of the auto-detection of a 3-way merge conflict. Jan 07 2011 Russel Winder <russel russel.org.uk> writes: Walter, On Fri, 2011-01-07 at 10:54 -0800, Walter Bright wrote: Russel Winder wrote: One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Why? Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go. Hummm . . . these days that is seen as being counter-productive to having a full and complete record of the evolution of a project. These days it is assumed that a reviewed changeset is committed as is and then further amendments made as a separate follow-up changeset. A core factor here is attribution and publicity of who did what. By committing reviewed changesets before amending them, the originator of the changeset is noted as the author of the changeset in the history. As I understand the consequences of the above system, you are always shown as the committer of every change -- but I may just have got this wrong, I haven't actually looked at the DMD repository. Mercurial, Bazaar and Git all support a variety of three-way merge tools including meld, but the whole point of branching and merging is that you don't do it manually -- except in Subversion, where merging branches remains a problem. But I want to do it manually. Clearly I don't understand your workflow. When I used Subversion, its merge capabilities were effectively none -- and as I understand it, things have not got any better in reality despite all the publicity about new merge support. So handling changesets from branches and elsewhere always had to be a manual activity. Maintaining a truly correct history was effectively impossible. Now with Bazaar, Mercurial and Git, merge is so crucial to the very essence of what these systems do that I cannot conceive of manually merging except to resolve actual conflicts. Branch and merge is so trivially easy in all of Bazaar, Mercurial and Git that it changes workflows. Reviewing changesets is still a crucially important thing, but merging them should not be part of that process. With Mercurial, Bazaar and Git, if you accept a changeset from a branch you just merge it, e.g. git merge some-feature-branch job done. If you want to amend the changeset before committing to HEAD then create a feature branch, merge the incoming changeset to the feature branch, work on it till satisfied, merge to HEAD. The only time I use meld these days is to process merge conflicts, not to handle merging per se. I've always been highly suspicious of the auto-detection of a 3-way merge conflict. I have always been highly suspicious that compilers can optimize my code better than I can ;-) -- Russel.
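(A concrete rendering of the flow Russel describes, with hypothetical branch names; the amended variant goes through a local integration branch before touching master:)

    # accept the changeset as-is:
    git merge some-feature-branch

    # or amend it first on a scratch branch:
    git checkout -b integrate-feature master
    git merge some-feature-branch
    # ...edit, test, commit fix-ups...
    git checkout master
    git merge integrate-feature

The same shape works in Mercurial and Bazaar; only the spelling of the branch commands differs.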
Jan 08 2011 Walter Bright <newshound2 digitalmars.com> writes: Russel Winder wrote: Walter, On Fri, 2011-01-07 at 10:54 -0800, Walter Bright wrote: Russel Winder wrote: One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Why? Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go. Hummm . . . these days that is seen as being counter-productive to having a full and complete record of the evolution of a project. These days it is assumed that a reviewed changeset is committed as is and then further amendments made as a separate follow-up changeset. A core factor here is attribution and publicity of who did what. By committing reviewed changesets before amending them, the originator of the changeset is noted as the author of the changeset in the history. As I understand the consequences of the above system, you are always shown as the committer of every change -- but I may just have got this wrong, I haven't actually looked at the DMD repository. I never thought of that. Mercurial, Bazaar and Git all support a variety of three-way merge tools including meld, but the whole point of branching and merging is that you don't do it manually -- except in Subversion, where merging branches remains a problem. But I want to do it manually. Clearly I don't understand your workflow. When I used Subversion, its merge capabilities were effectively none -- and as I understand it, things have not got any better in reality despite all the publicity about new merge support. So handling changesets from branches and elsewhere always had to be a manual activity. Maintaining a truly correct history was effectively impossible. Now with Bazaar, Mercurial and Git, merge is so crucial to the very essence of what these systems do that I cannot conceive of manually merging except to resolve actual conflicts. Branch and merge is so trivially easy in all of Bazaar, Mercurial and Git that it changes workflows. Reviewing changesets is still a crucially important thing, but merging them should not be part of that process. I never thought of it that way before. With Mercurial, Bazaar and Git, if you accept a changeset from a branch you just merge it, e.g. git merge some-feature-branch job done. If you want to amend the changeset before committing to HEAD then create a feature branch, merge the incoming changeset to the feature branch, work on it till satisfied, merge to HEAD. The only time I use meld these days is to process merge conflicts, not to handle merging per se. I've always been highly suspicious of the auto-detection of a 3-way merge conflict. I have always been highly suspicious that compilers can optimize my code better than I can ;-) You should be! Jan 08 2011 Iain Buclaw <ibuclaw ubuntu.com> writes: == Quote from Walter Bright (newshound2 digitalmars.com)'s article Russel Winder wrote: One thing I would dearly like is to be able to merge branches using meld. http://meld.sourceforge.net/ Why? Because meld makes it easy to review, selectively merge, and do a bit of editing all in one go.
I wholeheartedly agree with this. Jan 08 2011 Russel Winder <russel russel.org.uk> writes: On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote: [ . . . ] do that? The other thing I like a lot about svn is it sends out emails for each checkin. Sorry, I forgot to answer this question in my previous reply. Mercurial, Bazaar, and Git all have various hooks for the various branch and repository events. Commit emails are trivial in all of them. -- Russel. Jan 07 2011 Jonathan M Davis <jmdavisProg gmx.com> writes: On Friday 07 January 2011 02:09:31 Russel Winder wrote: On Thu, 2011-01-06 at 11:46 -0800, Walter Bright wrote: A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. That surprises me. Shifting from Subversion to any of Mercurial, Bazaar or Git is such a huge improvement in tooling, especially for support of feature branches. Part of that was probably because not that many people pay attention to the dmd internals mailing list. I don't recall seeing that post, and I do pay at least some attention to that list. I would have been for it, but then again, I'm also not one of the dmd developers - not that there are many. Personally, I'd love to see dmd, druntime, and Phobos switch over to git, since that's what I typically use. It would certainly be an improvement over subversion. But I can't compare it to other systems such as Mercurial and Bazaar, because I've never used them. Really, for me personally, git works well enough that I've had no reason to check any others out. I can attest though that git is a huge improvement over subversion. Before I started using git, I almost never used source control on my own projects, because it was too much of a pain. With git, it's extremely easy to set up a new repository, it doesn't pollute the whole source tree with source control files, and it doesn't force me to have a second copy of the repository somewhere else. So, thanks to git, I now use source control all the time. - Jonathan M Davis Jan 07 2011 "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes: On Thu, 06 Jan 2011 11:46:47 -0800, Walter Bright wrote: Russel Winder wrote: Pity, because using one of Mercurial, Bazaar or Git instead of Subversion is likely the best and fastest way of getting more quality contributions to review. Although only anecdotal, in every case where a team has switched to DVCS from CVCS -- except in the case of closed projects, obviously -- it has opened things up to far more people to provide contributions. Subversion is probably now the single biggest barrier to getting input on system evolution. A couple months back, I did propose moving to git on the dmd internals mailing list, and nobody was interested. I proposed the same on the Phobos list in May, but the discussion went nowhere. It seemed the general consensus was that SVN was "good enough". -Lars Jan 07 2011 Don <nospam nospam.com> writes: Walter Bright wrote: Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote: That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. I don't, either. There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once, before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy. Occasionally I also use my own DMD local repository, but it doesn't work very well (gets out of sync with the trunk too easily, because SVN isn't really set up for that development model). I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain in changing the development model, I don't think it's a change we should make in the near term. Jan 06 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes: On 1/6/11 9:18 AM, Don wrote: Walter Bright wrote: Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.451.1294306555.4748.digitalmars-d puremagic.com... On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote: That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers. Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch. I don't, either. There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once, before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy. Occasionally I also use my own DMD local repository, but it doesn't work very well (gets out of sync with the trunk too easily, because SVN isn't really set up for that development model). I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain in changing the development model, I don't think it's a change we should make in the near term. What are the advantages of Mercurial over git?
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 1/6/11 9:18 AM, Don wrote:
[ . . . ] I think that we should probably move to Mercurial eventually. [ . . . ]
What are the advantages of Mercurial over git? (git does allow multiple branches.)
Andrei
Jan 06 2011

bioinfornatics <bioinfornatics fedoraproject.org> writes:
I have used SVN, CVS a little, Mercurial, and Git, and I prefer Git; for me it is the better way. It is very powerful for managing branches and doing merges. Cherry-picking is very powerful too. And yes, Git allows multiple branches.
Jan 06 2011

"Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
What are the advantages of Mercurial over git? (git does allow multiple branches.)
Here's a comparison. Although I am partial to Mercurial, I have tried to be fair. Some of the points are in favor of Mercurial, some in favor of Git, and some are simply differences I noted (six of one, half a dozen of the other): http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=91657
An extra point I did not raise at the time: Git is deliberately engineered to be as different from CVS/SVN as possible (quoting Wikipedia: "Take CVS as an example of what /not/ to do; if in doubt, make the exact opposite decision"). IMO this makes it a poor choice when migrating from SVN. Mercurial (or Bazaar) would be much more comfortable.
Jerome
Jan 06 2011

David Nadlinger <see klickverbot.at> writes:
On 1/6/11 8:19 PM, "Jérôme M. Berger" wrote:
Here's a comparison. Although I am partial to Mercurial, I have tried to be fair.
Jérôme, I'm usually not the one arguing ad hominem, but are you sure that you really tried to be fair? If you want to make subjective statements about Mercurial - that you personally like it better because of this and that reason - that's fine, but please don't try to make it look like an objective comparison. A fair part of the arguments you made in the linked post are objectively wrong, which is understandable if you're mainly a Mercurial user, but please don't make it look to other people like you had done in-depth research on both. For example, you dwelt on »hg pull help« being an advantage over Git - where the equivalent command reads »git pull --help«. Are you serious?! By the way, at least as of Mercurial 1.6, you need to pass »--help« as a »proper« argument using two dashes as well; your command does not work (anymore).
An extra point I did not raise at the time: Git is deliberately engineered to be as different from CVS/SVN as possible (quoting Wikipedia: "Take CVS as an example of what /not/ to do; if in doubt, make the exact opposite decision").
You missed the »… quote Torvalds, speaking somewhat tongue-in-cheek« part - in the talk the quote is from, Linus Torvalds was making the point that centralized SCMs just can't keep up with distributed concepts, and as you probably know, he really likes to polarize. In the same talk, he also mentioned Mercurial being very similar to Git - does that make it an unfavorable switch as well in your eyes? I hope not…
David
Jan 06 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 06/01/2011 19:19, "Jérôme M. Berger" wrote:
Andrei Alexandrescu wrote:
What are the advantages of Mercurial over git? (git does allow multiple branches.)
I've also been mulling over whether to try out and switch away from Subversion to a DVCS, but never went ahead because I've also been undecided about Git vs. Mercurial. So this whole discussion here in the NG has been helpful, even though I rarely use branches, if at all.
However, there is an important issue for me that has not been mentioned yet; I wonder if other people also find it relevant. It annoys me a lot in Subversion: if you delete, rename, or copy a folder under version control in an SVN working copy without using the SVN commands, there is a high likelihood your working copy will break! It's so annoying, especially since sometimes no amount of svn revert, cleanup, unlock, override, update, etc. will fix it. I just had one recently where I had to delete and re-checkout the whole project because it was that broken. Other situations also seem to cause this, even when using SVN tooling (like partially updating from a commit that deletes or moves directories, or something like that). It's just so brittle. I think it may be a consequence of the design aspect of SVN where each subfolder of a working copy is a working copy as well (and each subfolder of a repository is a repository as well). Anyways, I hope Mercurial and Git are better at this; I'm definitely going to try them out with regards to this.
-- Bruno Medeiros - Software Engineer
Jan 28 2011

Michel Fortin <michel.fortin michelf.com> writes:
On 2011-01-28 11:29:49 -0500, Bruno Medeiros <brunodomedeiros+spam com.gmail> said:
[ . . . ] if you delete, rename, or copy a folder under version control in an SVN working copy without using the SVN commands, there is a high likelihood your working copy will break! [ . . . ]
Git doesn't care how you move your files around. It tracks files by their content. If you rename a file and most of the content stays the same, git will see it as a rename. If most of the file has changed, it'll see it as a new file (with the old one deleted). There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path. I don't know about Mercurial.
-- Michel Fortin michel.fortin michelf.com http://michelf.com/
Jan 28 2011
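A quick sketch of the behaviour Michel describes, using a plain filesystem move (file names hypothetical):

    mv src/oldname.d src/newname.d   # no git command involved
    git add -A src
    git status                       # should report: renamed: src/oldname.d -> src/newname.d
    git log --follow src/newname.d   # history is followed across the rename

Since Git resolves renames from content at diff time, 'git mv' and a plain mv-plus-add end up recorded identically.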
"Jérôme M. Berger" <jeberger free.fr> writes:
Michel Fortin wrote:
On 2011-01-28 11:29:49 -0500, Bruno Medeiros <brunodomedeiros+spam com.gmail> said:
[ . . . ] Anyways, I hope Mercurial and Git are better at this; I'm definitely going to try them out with regards to this.
Git doesn't care how you move your files around. [ . . . ] There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path. I don't know about Mercurial.
Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command, which will automatically remove any missing files and add any unknown non-ignored files. Addremove can detect renamed files if they are similar enough to the old file (the similarity level is configurable), but it will not detect copies.
Jerome
Jan 29 2011
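And the Mercurial side Jérôme describes, as a sketch (file names hypothetical; -s is the similarity threshold in percent):

    mv src/oldname.d src/newname.d
    hg addremove -s 90   # adds the new path, removes the old, records a rename if >=90% similar
    hg status -C         # -C lists the source of each recorded copy/rename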
Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 29/01/2011 10:02, "Jérôme M. Berger" wrote:
[ . . . ] Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command [ . . . ] but it will not detect copies.
Indeed, that's what I found out now that I tried Mercurial. So that's really nice (especially the "addremove" command); it's actually motivation enough for me to switch to Mercurial or Git, as it's a major annoyance in SVN.
I've learned a few more things recently: there's a minor issue with Git and Mercurial in that they both are not able to record empty directories. A very minor annoyance (it's workaround-able), but still conceptually lame; I mean, directories are resources too! It's curious that the wiki pages for both Git and Mercurial on this issue are exactly the same, word for word most of them: http://mercurial.selenic.com/wiki/MarkEmptyDirs https://git.wiki.kernel.org/index.php/MarkEmptyDirs (I guess it's because they were written by the same guy.)
A more serious issue that I learned (or rather forgot about before and remembered now) is the whole aspect of DVCSes keeping the entire repository history locally, which has important ramifications. If the repository is big, although disk space may not be much of an issue, it's a bit annoying when copying the repository locally(*), or cloning it from the internet and thus having to download large amounts of data. For example, in the DDT Eclipse IDE I keep the project dependencies (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsourc .ddt-build/target/) on source control, which is 141Mb total on a single revision, and they might change every semester or so... I'm still not sure what to do about this. I may split this part of the project into a separate Mercurial repository, although I do lose some semantic information because of this: a direct association between each revision in the source code projects, and the corresponding revision in the dependencies project. Conceptually I would want this to be a single repository.
(*) Yeah, I know Mercurial and Git may use hardlinks to speed up the cloning process, even on Windows, but that solution is not suitable for me, as my workflow is usually to copy entire Eclipse workspaces when I want to "branch" on some task. Doesn't happen that often, though.
-- Bruno Medeiros - Software Engineer
Feb 01 2011
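For the empty-directory limitation Bruno mentions above, the usual workaround from those wiki pages is to commit a placeholder file; the name below is only a convention, nothing in Git treats it specially:

    mkdir -p assets/cache
    touch assets/cache/.keep   # placeholder so the directory can be recorded
    git add assets/cache/.keep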
David Nadlinger <see klickverbot.at> writes:
On 2/1/11 2:44 PM, Bruno Medeiros wrote:
[…] a direct association between each revision in the source code projects, and the corresponding revision in the dependencies project. […]
With Git, you could use submodules for that task – I don't know if something similar exists for Mercurial.
David
Feb 01 2011
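A minimal sketch of the submodule arrangement David suggests (URL and path hypothetical): the outer repository records an exact commit of the dependencies repository, which keeps the per-revision association Bruno wants:

    git submodule add https://example.org/ddt-dependencies.git deps
    git commit -m "Track dependencies at a fixed revision"
    # in a fresh clone of the outer repository:
    git submodule update --init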
foobar <foo bar.com> writes:
Bruno Medeiros Wrote:
[ . . . ] A more serious issue that I learned (or rather forgot about before and remembered now) is the whole aspect of DVCSes keeping the entire repository history locally, which has important ramifications. If the repository is big, although disk space may not be much of an issue, it's a bit annoying when copying the repository locally, or cloning it from the internet and thus having to download large amounts of data. [ . . . ]
You raised a valid concern regarding the local copy issue, and it has already been taken care of in DVCSes:
1. Git stores all the actual data in "blobs" which are compressed, whereas SVN stores everything in plain text (including all the history!).
2. Git stores and transfers deltas, not full files, unlike SVN.
3. It's possible to wrap a bunch of commits into a "bundle" - a single compressed binary file. This file can then be downloaded, and then you can git fetch from it.
In general, Git (and I assume Mercurial as well) needs far less space than comparable SVN repositories and is much faster at fetching from upstream than svn update. Try cloning your SVN repository with git-svn and compare repository sizes.
Feb 01 2011
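foobar's point 3 above, sketched (file and branch names hypothetical); a bundle behaves like a read-only remote packed into a single file:

    git bundle create druntime.bundle master   # on the machine with the history
    # transfer druntime.bundle by any means, then on the receiving side:
    git bundle verify druntime.bundle
    git fetch druntime.bundle master:incoming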
Walter Bright <newshound2 digitalmars.com> writes:
Bruno Medeiros wrote:
A more serious issue [ . . . ] is the whole aspect of DVCSes keeping the entire repository history locally, which has important ramifications. If the repository is big, although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
Feb 01 2011

Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, February 01, 2011 15:07:58 Walter Bright wrote:
[ . . . ] I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
And some things will likely _always_ make disk usage a concern. Video would be a good example. If you have much video, even with good compression, it's going to take up a lot of space. Granted, there are _lots_ of use cases which just don't take up enough disk space to matter anymore, but you can _always_ find ways to use up disk space. Entertainingly, a fellow I know had a friend who joked that he could always hold all of his data in a shoebox. Originally, it was punch cards. Then it was 5 1/4" floppy disks. Then it was 3 1/2" floppy disks. Then it was CDs. Etc. Storage devices keep getting bigger and bigger, but we keep finding ways to fill them...
- Jonathan M Davis
Feb 01 2011

Brad Roberts <braddr slice-2.puremagic.com> writes:
On Tue, 1 Feb 2011, Walter Bright wrote:
[ . . . ]
For what it's worth, the sizes of the key git dirs on my box:
dmd.git == 4.4 - 5.9M (depends on whether the gc has run recently to re-pack new objects)
druntime.git == 1.4 - 3.0M
phobos.git == 5.1 - 6.7M
The checked-out copy of each of those is considerably more than the packed full history. The size, inclusive of full history and the checked-out copy, after a make clean:
dmd 15M
druntime 4M
phobos 16M
I.e., essentially negligible.
Later, Brad
Feb 01 2011

Walter Bright <newshound2 digitalmars.com> writes:
Brad Roberts wrote:
I.e., essentially negligible.
Yeah, and I caught myself worrying about the disk usage from having two clones of the git repository (one for D1, the other for D2).
Feb 01 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Bleh. I tried to use Git to update some of the doc files, but getting the thing to work will be a miracle. Git can't find the public keys unless I use msysgit. Great. How exactly do I cd to D:\ ? So I try git-gui. Seems to work fine; I clone the forked repo and make a few changes. I try to commit, it says I have to update first. So I do that. *Error: crash crash crash*. I try to close the thing, it just keeps crashing. CTRL+ALT+DEL time.. Okay, I try another GUI package, GitExtensions. I make new public/private keys and add them to github, I'm about to clone but then I get this "fatal: The remote end hung up unexpectedly". I don't know what to say..
Feb 01 2011

Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
I don't know what to say..
Git is a Linux program and will never work right on Windows. The problems you're experiencing are classic ones I find whenever I attempt to use a Linux program that has been "ported" to Windows.
Feb 01 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:
[ . . . ]
Yeah, I know what you mean. "Use my app on Windows too, it works! But you have to install this Linux simulator first, though." Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Feb 01 2011

Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Microemacs floated around the intarnets for free back in the 80's, and I liked it because it was very small, fast, and customizable. Having an editor that fit in 50k was just the ticket for a floppy based system. Most code editors of the day were many times larger, took forever to load, etc. I wrote my own make because I needed one to sell and so couldn't use someone else's.
Feb 01 2011
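Brad's figures above move around because Git periodically repacks loose objects into a compressed pack; a sketch of how one might watch that happen (run inside a clone):

    git count-objects -v   # loose vs. packed object counts and sizes
    git gc                 # repack; the step that shrinks the on-disk figures
    du -sh .git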
Brad Roberts <braddr puremagic.com> writes:
On 2/1/2011 7:55 PM, Andrej Mitrovic wrote:
[ . . . ] Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Of course, it forms a nice vicious circle. Without users, there's little incentive to fix, and chances are there are fewer users reporting bugs. Sounds.. familiar. :)
Feb 01 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:
I've noticed you have "Version Control with Git" listed in your list of books. Did you just buy that recently, or were you secretly planning to switch to Git at the instant someone mentioned it? :p
Feb 01 2011

Walter Bright <newshound2 digitalmars.com> writes:
Andrej Mitrovic wrote:
I've noticed you have "Version Control with Git" listed in your list of books. Did you just buy that recently, or were you secretly planning to switch to Git at the instant someone mentioned it? :p
I listed it recently.
Feb 01 2011

Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/2/11, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:
On 2/2/11, Walter Bright <newshound2 digitalmars.com> wrote:
...listed in your list...
Crap.. I just made a 2-dimensional book list by accident. My bad.
Feb 01 2011

David Nadlinger <see klickverbot.at> writes:
On 2/2/11 3:17 AM, Andrej Mitrovic wrote:
Bleh. I tried to use Git to update some of the doc files, but getting the thing to work will be a miracle. Git can't find the public keys unless I use msysgit. Great. How exactly do I cd to D:\ ?
If you are new to Git or SSH, the folks at GitHub have put up a tutorial explaining how to generate and set up a pair of SSH keys: http://help.github.com/msysgit-key-setup/. There is also a page describing solutions to some SSH setup problems: http://help.github.com/troubleshooting-ssh/. If you already have a private/public key pair and want to use it with Git, either copy it to Git's .ssh/ directory or edit the .ssh/config of the SSH instance used by Git accordingly. If you need to refer to »D:\somefile« inside the MSYS shell, use »/d/somefile«. I don't quite get what you mean by »git can't find the public keys unless I use msysgit« - obviously you need to modify the configuration of the SSH program Git uses, but other than that, you don't need to use the MSYS shell for setting things up; you can just use Windows Explorer and your favorite text editor for that as well.
David
Feb 02 2011
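For reference, the key setup in David's links boils down to a few commands (the email is a placeholder, and the GitHub settings-page step is described from memory, so treat this as a sketch):

    ssh-keygen -t rsa -C "you@example.com"   # generate the key pair; accept the default path
    cat ~/.ssh/id_rsa.pub                    # paste this into GitHub's SSH public keys page
    ssh -T git@github.com                    # test; should greet you by account name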
"Jérôme M. Berger" <jeberger free.fr> writes:
Andrej Mitrovic wrote:
Bleh. I tried to use Git to update some of the doc files, but getting the thing to work will be a miracle. [ . . . ] I make new public/private keys and add them to github, I'm about to clone but then I get this "fatal: The remote end hung up unexpectedly". I don't know what to say..
Why do you think I keep arguing against Git every chance I get?
Jerome
Feb 02 2011

Brad Roberts <braddr puremagic.com> writes:
On 2/1/2011 6:17 PM, Andrej Mitrovic wrote:
[ . . . ] I don't know what to say..
I use cygwin for all my windows work (which I try to keep to a minimum). Works just fine in that environment.
Feb 01 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 01/02/2011 23:07, Walter Bright wrote:
Bruno Medeiros wrote:
A more serious issue [ . . . ] is the whole aspect of DVCSes keeping the entire repository history locally, which has important ramifications. If the repository is big, although disk space may not be much of an issue,
I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
Well, like I said, my concern about size is not so much disk space, but the time to make local copies of the repository, or cloning it from the internet (and the associated transfer times), both of which are not negligible yet. My project at work could easily have gone to 1Gb of repo size if it had been stored on a DVCS for the last year or so! :S
I hope this gets addressed at some point. But I fear that the main developers of both Git and Mercurial may be too "biased" by experience with projects that are typically somewhat small in size, in terms of bytes (projects that consist almost entirely of source code). For example, in UI applications it would be common to store binary data (images, sounds, etc.) in source control. The other case is what I mentioned before: wanting to store dependencies together with the project (in my case including the javadoc and source code of the dependencies - and there are very good reasons to want to do that).
In this analysis: http://code.google.com/p/support/wiki/DVCSAnalysis they said that Git has some functionality to address this issue:
"Client Storage Management. Both Mercurial and Git allow users to selectively pull branches from other repositories. This provides an upfront mechanism for narrowing the amount of history stored locally. In addition, Git allows previously pulled branches to be discarded. Git also allows old revision data to be pruned from the local repository (while still keeping recent revision data on those branches). With Mercurial, if a branch is in the local repository, then all of its revisions (back to the very initial commit) must also be present, and there is no way to prune branches other than by creating a new repository and selectively pulling branches into it.
There has been some work addressing this in Mercurial, but nothing official yet."
However, I couldn't find more info about this, and other articles and comments about Git seem to omit or contradict it... :S Can Git really have a usable but incomplete local clone?
-- Bruno Medeiros - Software Engineer
Feb 04 2011

Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-04 11:12:12 -0500, Bruno Medeiros <brunodomedeiros+spam com.gmail> said:
Can Git really have a usable but incomplete local clone?
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
-- Michel Fortin michel.fortin michelf.com http://michelf.com/
Feb 04 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 04/02/2011 20:11, Michel Fortin wrote:
Yes, it's called a shallow clone. See the --depth switch of git clone: <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
I was about to say "Cool!", but then I checked the doc at that link and it says: "A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches."
So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :(
-- Bruno Medeiros - Software Engineer
Feb 09 2011

Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-09 07:49:31 -0500, Bruno Medeiros <brunodomedeiros+spam com.gmail> said:
[ . . . ] So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :(
Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries to determine the common ancestor. Be sure to have enough depth that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point, and you should be safe. At least, that's what I understand from: <http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>
-- Michel Fortin michel.fortin michelf.com http://michelf.com/
Feb 09 2011
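The mechanics under discussion, as a sketch (URL hypothetical): --depth at clone time truncates the history, and a later fetch with a larger --depth deepens it if the truncated history turns out not to contain a needed common ancestor:

    git clone --depth 100 git://example.org/project.git
    cd project
    git fetch --depth 1000   # deepen the local history later if needed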
"nedbrek" <nedbrek yahoo.com> writes:
Hello all,
"Michel Fortin" <michel.fortin michelf.com> wrote in message news:iiu8dm$10te$1 digitalmars.com...
[ . . . ] Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries to determine the common ancestor. [ . . . ]
The other way to collaborate is to email someone a diff. Git has a lot of support for extracting diffs from emails and applying the patches.
HTH, Ned
Feb 10 2011
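The mail-based workflow Ned mentions is built into Git itself; a sketch (revision range and file name illustrative):

    git format-patch origin/master..HEAD   # writes one mbox-style file per commit
    # send the files by mail; the receiver applies them, authorship intact:
    git am 0001-fix-something.patch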
Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 09/02/2011 14:27, Michel Fortin wrote:
[ . . . ] Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries to determine the common ancestor. [ . . . ]
Interesting. But it still feels very much like second-class functionality, not something they really have in mind to support well, at least not yet. Ideally, if one wants to push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing. Before someone says "oh, but that defeats some of the purposes of a distributed VCS, like being able to work offline": I know, and I personally don't care that much; in fact, I find this "benefit" of DVCS has been overvalued way out of proportion. Does anyone do any serious coding while being offline for an extended period of time? Some people mentioned coding on the move, with laptops, but seriously, even if I am connected to the Internet I cannot code with my laptop only; I need it connected to a monitor, as well as a mouse (and preferably a keyboard as well).
-- Bruno Medeiros - Software Engineer
Feb 11 2011

Michel Fortin <michel.fortin michelf.com> writes:
On 2011-02-11 08:05:27 -0500, Bruno Medeiros <brunodomedeiros+spam com.gmail> said:
[ . . . ] Ideally, if one wants to push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing.
Actually, there's no "central" repository in Git. But I agree with your idea in general: one of the remotes could be designated as a source to look for when encountering a missing object, probably the one from which you shallowly cloned. All we need is someone to implement that.
-- Michel Fortin michel.fortin michelf.com http://michelf.com/
Feb 11 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/02/2011 18:31, Michel Fortin wrote:
Actually, there's no "central" repository in Git.
That stuff about DVCS not having a central repository is another thing that is being said a lot, but it is only true in a very shallow (and non-useful) way. Yes, in DVCS there are no more "working copies" as in Subversion; now everyone's working copy is a full-fledged repository/clone that in technical terms is a peer of any other repository. However, from an organizational point of view in a project, there is always going to be a "central" repository: the one that actually represents the product/application/library, where the builds and releases are made from. (Of course, there could be more than one central repository if there are multiple kinds of releases, like stable/experimental, or forks of the product, etc.) Maybe the DVCS world likes the term public/shared repository better, but that doesn't make much difference.
-- Bruno Medeiros - Software Engineer
Feb 16 2011

Russel Winder <russel russel.org.uk> writes:
On Wed, 2011-02-16 at 14:51 +0000, Bruno Medeiros wrote:
[ . . . ] from an organizational point of view in a project, there is always going to be a "central" repository. [ . . . ]
Definitely the case. There can only be one repository that represents the official state of a given project. That isn't really the issue in the move from CVCS systems to DVCS systems.
Maybe the DVCS world likes the term public/shared repository better, but that doesn't make much difference.
In the Bazaar community, and I think increasingly in the Mercurial and Git ones, people talk of the "mainline" or "master".
-- Russel.
Feb 16 2011

Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/16 Russel Winder <russel russel.org.uk>:
Definitely the case. There can only be one repository that represents the official state of a given project. That isn't really the issue in the move from CVCS systems to DVCS systems.
Just note that not all projects have a specific "state" to represent. Many projects are centered around the concept of a centralized project, a "core" team, and all-around central organisation and planning. Some projects, however - I guess the Linux kernel is a prime example - have been quite decentralized in their nature for a long time. In the case of KDE, for a centralized example, there is a definite "project version", which is the version currently blessed by the central project team. There is centralized project planning, including meetings, setting out goals for the coming development. In the case of Linux, it's FAR less obvious. Sure, most people see master torvalds/linux-2.6.git as THE Linux version, but there are many other trees interesting to track as well, such as the various distribution trees which might incorporate many drivers not in mainline; for older stability-oriented kernels, RHEL or Debian is probably THE version to care about. You might also be interested in special-environment kernels, such as non-x86 kernels, in which case you're probably more interested in the central repo for that architecture, which is rarely Linus's. Also, IIRC, hard and soft realtime enthusiasts don't look to Linus's tree first either. Above all, in the Linux kernel there is not much "centralised planning". Linus doesn't call a big planning meeting quarterly to set up specific milestones for the next kernel release; instead, at the beginning of each cycle he is spammed with things already developed independently, scratching someone's itch. He then cherry-picks the things that have got good reviews and are interesting for where he wants to go with the kernel.
That is not to say that there isn't a lot of coordination and communication, but there isn't a clear centralized authority steering development in the same way as in many other projects. The bottom line is, many projects, even ones using DVCS, are often centrally organized. However, the Linux kernel is clear evidence that it is not the only project model that works.
Feb 16 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 16/02/2011 17:54, Ulrik Mikaelsson wrote:
[ . . . ] The bottom line is, many projects, even ones using DVCS, are often centrally organized. However, the Linux kernel is clear evidence that it is not the only project model that works.
Yeah, that's true. Some projects, the Linux kernel being one of the best examples, are more distributed in nature than not, in actual organizational terms. But projects like that are (and will remain) in the minority, a minority which is probably very, very small.
-- Bruno Medeiros - Software Engineer
Feb 17 2011

Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/17 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
Yeah, that's true. [ . . . ] But projects like that are (and will remain) in the minority, a minority which is probably very, very small.
Indeed.
However, I think it will be interesting to see how things develop, and whether this will still be the case in the future. The Linux kernel and a few other projects were probably decentralized from the start by necessity, filling very different purposes. However, new tools tend to affect models, which might make decentralization a bit more common in the future. In any case, it's an interesting time to do software development.
Feb 18 2011

Walter Bright <newshound2 digitalmars.com> writes:
Bruno Medeiros wrote:
but seriously, even if I am connected to the Internet I cannot code with my laptop only, I need it connected to a monitor, as well as a mouse (and preferably a keyboard as well).
I found I can't code on my laptop anymore; I am too used to and needful of a large screen.
Feb 11 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 11/02/2011 23:30, Walter Bright wrote:
[ . . . ] I found I can't code on my laptop anymore; I am too used to and needful of a large screen.
Yeah, that was my point as well. The laptop monitor is too small for coding (unless one has a huge laptop).
-- Bruno Medeiros - Software Engineer
Feb 16 2011

Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:
2011/2/4 Bruno Medeiros <brunodomedeiros+spam com.gmail>:
Well, like I said, my concern about size is not so much disk space, but the time to make local copies of the repository, or cloning it from the internet (and the associated transfer times), both of which are not negligible yet. [ . . . ]
I think the storage/bandwidth requirements of DVCSes are very often exaggerated, especially for text, but also somewhat for blobs.
* For text content, the compression of archives reduces it to, perhaps, 1/5 of the original size? - That means that unless you completely rewrite a file 5 times during the course of a project, simple per-revision compression of the file will turn out smaller than the single uncompressed base file that Subversion transfers and stores. - The delta compression applied ensures that small changes do not count as a "rewrite".
* For blobs, the archive compression may not do as much, and they certainly pose a larger challenge for storing history, but: - AFAIU, at least Git delta-compresses even binaries, so even changes in them might be slightly reduced (I don't know about the others). - I think more and more graphics today are written in SVG? - I believe that for most projects, audio files are usually not changed very often once they have entered a project; usually existing samples are simply copied in?
* For both binaries and text, and for most projects, the latest revision is usually the largest.
(Projects usually grow over time; they don't consistently shrink.) I.e. older revisions are, compared to current ones, much, much smaller, making the size of old history small compared to the size of current history.
Finally, as a test, I tried checking out the last version of druntime from SVN and comparing it to Git (AFAICT, history was preserved in the git migration); the results were about what I expected. Checking out trunk from SVN, and the whole history from Git:
SVN: 7.06 seconds, 5.3 MB on disk
Git: 2.88 seconds, 3.5 MB on disk
Improvement Git/SVN: time reduced by 59%, space reduced by 34%.
I did not measure bandwidth, but my guess is it is somewhere between the disk and time reductions. Also, if someone has an example of a recently converted repository including some blobs, it would make an interesting experiment to repeat.
Regards / Ulrik
-----
ulrik ulrik ~/p/test> time svn co http://svn.dsource.org/projects/druntime/trunk druntime_svn
...
0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k 544inputs+11736outputs (3major+3275minor)pagefaults 0swaps
ulrik ulrik ~/p/test> du -sh druntime_svn
5,3M druntime_svn
ulrik ulrik ~/p/test> time git clone git://github.com/D-Programming-Language/druntime.git druntime_git
...
0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k 3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps
ulrik ulrik ~/p/test> du -sh druntime_git/
3,5M druntime_git/
Feb 06 2011

Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
On 06/02/2011 14:17, Ulrik Mikaelsson wrote:
[ . . . ] I think the storage/bandwidth requirements of DVCSes are very often exaggerated, especially for text, but also somewhat for blobs. [ . . . ]
- I believe, for most projects, audio-files are usually not changed very often, once entered a project? Usually existing samples are simply copied in? * For both binaries and text, and for most projects, the latest revision is usually the largest. (Projects usually grow over time, they don't consistently shrink) I.E. older revisions are, compared to current, much much smaller, making the size of old history smaller compared to the size of current history. Finally, as a test, I tried checking out the last version of druntime from SVN and compare it to git (AFICT, history were preserved in the git-migration), the results were about what I expected. Checking out trunk from SVN, and the whole history from git: SVN: 7.06 seconds, 5,3 MB on disk Git: 2.88 seconds, 3.5 MB on disk Improvement Git/SVN: time reduced by 59%, space reduced by 34%. I did not measure bandwidth, but my guess is it is somewhere between the disk- and time- reductions. Also, if someone has an example of a recently converted repository including some blobs it would make an interesting experiment to repeat. Regards / Ulrik ----- ulrik ulrik ~/p/test> time svn co http://svn.dsource.org/projects/druntime/trunk druntime_svn ... 0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k 544inputs+11736outputs (3major+3275minor)pagefaults 0swaps ulrik ulrik ~/p/test> du -sh druntime_svn 5,3M druntime_svn ulrik ulrik ~/p/test> time git clone git://github.com/D-Programming-Language/druntime.git druntime_git ... 0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k 3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps ulrik ulrik ~/p/test> du -sh druntime_git/ 3,5M druntime_git/ Yes, Brad had posted some statistics of the size of the Git repositories for dmd, druntime, and phobos, and yes, they are pretty small. Projects which contains practically only source code, and little to no binary data are unlikely to grow much and repo size ever be a problem. But it might not be the case for other projects (also considering that binary data is usually already well compressed, like .zip, .jpg, .mp3, .ogg, etc., so VCS compression won't help much). It's unlikely you will see converted repositories with a lot of changing blob data. DVCS, at the least in the way they work currently, simply kill this workflow/organization-pattern. I very much suspect this issue will become more important as time goes on - a lot of people are still new to DVCS and they still don't realize the full implications of that architecture with regards to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word on the VCS battle, in that in a few years time people are *again* talking about and switching to another VCS :( . Mark these words. (The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy) -- Bruno Medeiros - Software Engineer Feb 09 2011 =?UTF-8?B?IkrDqXLDtG1lIE0uIEJlcmdlciI=?= <jeberger free.fr> writes: Bruno Medeiros wrote: Yes, Brad had posted some statistics of the size of the Git repositorie= s for dmd, druntime, and phobos, and yes, they are pretty small. Projects which contains practically only source code, and little to no binary data are unlikely to grow much and repo size ever be a problem. 
Feb 09 2011 "Jérôme M. Berger" <jeberger free.fr> writes:

Bruno Medeiros wrote:

Yes, Brad had posted some statistics of the size of the Git repositories for dmd, druntime, and phobos, and yes, they are pretty small. Projects which contain practically only source code, and little to no binary data, are unlikely to grow much, and repo size will likely never be a problem. But it might not be the case for other projects (also considering that binary data is usually already well compressed, like .zip, .jpg, .mp3, .ogg, etc., so VCS compression won't help much). It's unlikely you will see converted repositories with a lot of changing blob data. DVCSs, at least in the way they work currently, simply kill this workflow/organization pattern. I very much suspect this issue will become more important as time goes on - a lot of people are still new to DVCSs and they still don't realize the full implications of that architecture with regard to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word on the VCS battle, in that in a few years' time people will *again* be talking about and switching to another VCS :( . Mark these words. (The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy.)

There are several Mercurial extensions that attempt to address this issue. See for example: http://wiki.netbeans.org/HgExternalBinaries or http://mercurial.selenic.com/wiki/BigfilesExtension I do not know how well they perform in practice.

Jerome
--
mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr

Feb 09 2011 Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:

2011/2/9 Bruno Medeiros <brunodomedeiros+spam com.gmail>: [snip: the same "repositories grow forever" paragraph, quoted in full above]

You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you? Besides, AFAIU this discussion was originally regarding the D language components, i.e. DMD, druntime and Phobos. Not a lot of binaries here.

Feb 09 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:

On 09/02/2011 23:02, Ulrik Mikaelsson wrote: 2011/2/9 Bruno Medeiros<brunodomedeiros+spam com.gmail>:

It's unlikely you will see converted repositories with a lot of changing blob data. DVCSs, at least in the way they work currently, simply kill this workflow/organization pattern. I very much suspect this issue will become more important as time goes on - a lot of people are still new to DVCSs and they still don't realize the full implications of that architecture with regard to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word on the VCS battle, in that in a few years' time people will *again* be talking about and switching to another VCS :( . Mark these words.
(The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy.)

You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you?

You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any off the top of my head, except the one at my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now, I'm curious how big it is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29 ) But other than that, what exactly do you want to test? There is no specific thing to test: if you add a binary file (in a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no other way around it. (Unless on Git you rewrite the history of the repo, which will doubtfully ever be allowed on central repositories.)

-- Bruno Medeiros - Software Engineer

Feb 11 2011 Jean Crystof <a a.a> writes:

Bruno Medeiros Wrote: [snip: Bruno's message above, quoted in full]

One thing we've done at work with game asset files is we put them in a separate repository, and to conserve space we use a cleaned branch as a base for the work repository. The "graph" below shows how it works:

initial state -> alpha1 -> alpha2 -> beta1 -> internal rev X -> internal rev X+1 -> internal rev X+2 -> ... -> internal rev X+n -> beta2

Now we have a new beta2. What happens next is we take a snapshot copy of the state of beta2, go back to beta1, create a new branch and "paste" the snapshot there.
Now we move the old working branch with internal revisions to someplace safe and start using this as a base. And the work continues with this:

initial state -> alpha1 -> alpha2 -> beta1 -> beta2 -> internal rev X+n+1 -> ...

The repository size won't become a problem with text / source code. Since you're an SVN advocate, please explain how well it works with 2500 GB of asset files?
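(As an aside, Jean's snapshot-and-restart pattern maps fairly directly onto git's orphan branches. A sketch only, not something any poster here describes; the branch names are made up and an "archive" remote is assumed to exist:

# with the beta2 state checked out on the heavyweight branch:
git checkout --orphan beta2-base   # new branch with no parent commits
git add -A
git commit -m "snapshot of beta2, internal-revision history dropped"
git branch -m master old-history   # park the heavy branch under another name
git push archive old-history       # keep the full history in an archive repo
git branch -D old-history          # then drop it locally
git branch -m beta2-base master    # continue work from the lean snapshot

Clones made afterwards carry only the snapshot and whatever follows it; the internal history survives in the archive repository.)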
Feb 11 2011 Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:

On 11/02/2011 13:14, Jean Crystof wrote: Since you're an SVN advocate, please explain how well it works with 2500 GB of asset files?

I'm not an SVN advocate. I have started using DVCSs over Subversion, and generally I agree they are better, but what I'm saying is that they are not all roses... it is not a complete win-win, there are a few important cons, like this one.

-- Bruno Medeiros - Software Engineer

Feb 16 2011 Ulrik Mikaelsson <ulrik.mikaelsson gmail.com> writes:

2011/2/11 Bruno Medeiros <brunodomedeiros+spam com.gmail>: On 09/02/2011 23:02, Ulrik Mikaelsson wrote: [snip] But other than that, what exactly do you want to test? [snip]

I want to test how much overhead the git version _actually_ has, compared to the SVN version. Even though the jpgs are unlikely to be much more compressible with regular compression, with delta-compression and the fact of growing project size it might still be interesting to see how much overhead we're talking about, and what the performance over the network is.

Feb 12 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Thu, 06 Jan 2011 17:42:29 +0200, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote: What are the advantages of Mercurial over git? (git does allow multiple branches.)

We've had a discussion in #d (IRC), and the general consensus there seems to be strongly in favor of Git/GitHub. For completeness (there's been a discussion before) here are my arguments:

1) Git has the largest user base - more people will be able to get started hacking on the source immediately. (GitHub compared to DSource below; some of these also apply to Gitorious, Bitbucket, Launchpad)
2) One-click forking - you can easily publish improvements that are easily discoverable to people interested in the project. (This practically guarantees that an open-source project will never hit a dead end, as long as some people are interested in it - both occasional patches and maintained forks are easily discoverable.)
3) UI for pull requests (requests to merge changes in a forked repository upstream), with comments.
4) Inline comments (you can comment on a specific line in a commit/patch). This integrates very nicely with 3) for great code review capabilities.
5) (Unique to GitHub) The network graph allows visualizing all commits in all forks of the project.
6) GitHub is run by a commercial company, and the same infrastructure is used for hosting commercial projects. Therefore, you can expect better uptime and support.

GitHub has integrated wiki, issues and downloads (all optional). One thing GitHub doesn't have that DSource has is forums. I think there is no "shame" in leaving DSource for DigitalMars projects; many large open-source projects use GitHub (see GitHub's front page). Some existing D projects on GitHub: https://github.com/languages/D

I think Jérôme's observations of Git performance are specific to Windows. Git is expected to be slower on Windows, since it runs on top of cygwin/msys. Here's a study on the Git wiki: https://git.wiki.kernel.org/index.php/GitBenchmarks Google has done a study of Git vs. Mercurial in 2008: http://code.google.com/p/support/wiki/DVCSAnalysis The main disadvantage they found in Git (poor performance over HTTP) doesn't apply to us, and I believe it was addressed in recent versions anyway.

Disclaimer: I use Git, and avoid Mercurial if I can mainly because I don't want to learn another VCS. Nevertheless, I tried to be objective above. As I mentioned on IRC, I strongly believe this must be a fully-informed decision, since changing VCSes again is unrealistic once it's done.

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 06 2011 Travis Boucher <boucher.travis gmail.com> writes:

On 01/06/11 17:55, Vladimir Panteleev wrote: Disclaimer: I use Git, and avoid Mercurial if I can mainly because I don't want to learn another VCS. Nevertheless, I tried to be objective above. As I mentioned on IRC, I strongly believe this must be a fully-informed decision, since changing VCSes again is unrealistic once it's done.

Recently I have been using mercurial (bitbucket). I have used git previously, and subversion a lot. The question I think is less of git vs. mercurial and more of (git|mercurial) vs. (subversion), and even more (github|bitbucket) vs. dsource. I like dsource a lot; however, it doesn't compare feature-wise to github & bitbucket. The only argument feature-wise is forums, and in reality we already have many places to offer/get support for D and D projects other than the dsource forums (newsgroups & irc, for example). Another big issue I have with dsource is that it's hard to find active projects among projects that have been dead (sometimes for 5+ years). The 'social coding' networks allow projects to be easily revived in case they do die. Personally I don't care which is used (git|mercurial, github|bitbucket), as long as we find a better way of managing the code, and a nice way of doing experimental things and having a workflow to have those experimental things pulled into the official code bases. dsource has served us well, and could still be a useful tool (maybe have it index D stuff from (github|bitbucket)?), but it's time to start using some of the other, better, tools out there.

Jan 06 2011 Michel Fortin <michel.fortin michelf.com> writes:

On 2011-01-06 19:55:10 -0500, "Vladimir Panteleev" <vladimir thecybershadow.net> said:

2) One-click forking - you can easily publish improvements that are easily discoverable to people interested in the project. (This practically guarantees that an open-source project will never hit a dead end, as long as some people are interested in it - both occasional patches and maintained forks are easily discoverable.)
Easy forking is nice, but it could be a problem in our case. The license for the backend is not open-source enough for someone to republish it (in a separate repo of their own) without Walter's permission.

-- Michel Fortin michel.fortin michelf.com http://michelf.com/

Jan 06 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin <michel.fortin michelf.com> wrote: Easy forking is nice, but it could be a problem in our case. The license for the backend is not open-source enough for someone to republish it (in a separate repo of their own) without Walter's permission.

I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 06 2011 Travis Boucher <boucher.travis gmail.com> writes:

On 01/06/11 18:30, Vladimir Panteleev wrote: On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin <michel.fortin michelf.com> wrote: [snip] I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.

I agree; separating out the proprietary stuff has other interesting possibilities, such as a D front end written in D and integration with IDEs and analysis tools. Of course all of this is possible now, but it'd make merging front end updates so much nicer.

Jan 06 2011 Michel Fortin <michel.fortin michelf.com> writes:

On 2011-01-06 20:30:53 -0500, "Vladimir Panteleev" <vladimir thecybershadow.net> said: On Fri, 07 Jan 2011 03:17:50 +0200, Michel Fortin <michel.fortin michelf.com> wrote: [snip] I suggested elsewhere in this thread that the two must be separated first. I think it must be done anyway when moving to a DVCS, regardless of the specific one or which hosting site we'd use.

Which means that we need another solution for the backend, and if that solution isn't too worthless it could be used to host the other parts too and keep them together. That said, wherever the repositories are kept, nothing prevents them from being automatically mirrored on github (or anywhere else) by simply adding a post-update hook in the main repository.

-- Michel Fortin michel.fortin michelf.com http://michelf.com/

Jan 06 2011 David Nadlinger <see klickverbot.at> writes:

On 1/7/11 2:43 AM, Michel Fortin wrote: Which means that we need another solution for the backend, and if that solution isn't too worthless it could be used to host the other parts too and keep them together.

Just to be sure: You did mean »together« as in »separate repositories on the same hosting platform«, right? I don't even think we necessarily can't use GitHub or the likes for the backend; we'd just need permission from Walter to redistribute the sources through that repository, right? It's been quite some time since I had a look at the backend license though…

David

Jan 06 2011
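(Michel's mirroring suggestion is a one-liner in practice. A sketch of a post-update hook for the primary bare repository; the GitHub address is a placeholder, and the hook file needs to be executable (chmod +x hooks/post-update):

#!/bin/sh
# hooks/post-update: runs after every push into this repository;
# re-push everything, including branch deletions, to the mirror.
exec git push --mirror git@github.com:example/dmd-mirror.git

With --mirror, the mirror tracks the main repository exactly, so it can stay a read-only convenience copy.)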
Andrej Mitrovic <andrej.mitrovich gmail.com> writes:

I don't think git really needs MSYS? I mean I've just installed git again and it does have its own executable runnable from the console. It seems to have a gui as well, runnable with "git gui". Pretty cool. And you can create an icon shortcut to the repo. Sweet. I'd vote for either of the two, although I have to say I do like github a lot. I didn't know it supported wiki pages though; I haven't seen anyone use those on a project.

Jan 06 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: I don't think git really needs MSYS? I mean I've just installed git again and it does have its own executable runnable from the console.

MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 06 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes:

On 1/7/11, Vladimir Panteleev <vladimir thecybershadow.net> wrote: On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: I don't think git really needs MSYS? I mean I've just installed git again and it does have its own executable runnable from the console. MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.

Aye, but I didn't download that one, I got the one on the top here: https://code.google.com/p/msysgit/downloads/list?can=3 And if I put git.exe in its own directory the only .dll it complains about is libiconv2.dll (well that, and some missing templates). Using these two alone seems to work fine.

Jan 06 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Fri, 07 Jan 2011 04:31:35 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: [snip] Aye, but I didn't download that one, I got the one on the top here: https://code.google.com/p/msysgit/downloads/list?can=3 And if I put git.exe in its own directory the only .dll it complains about is libiconv2.dll (well that, and some missing templates). Using these two alone seems to work fine.

Ah, that's interesting! Must be a recent change. So they finally rewrote all the remaining bash/perl components in C? If so, that should give it a significant speed boost, most noticeable on Windows.

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 07 2011 Jacob Carlborg <doob me.com> writes:

On 2011-01-07 03:31, Andrej Mitrovic wrote: On 1/7/11, Vladimir Panteleev<vladimir thecybershadow.net> wrote: [snip] MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though. Aye, but I didn't download that one, I got the one on the top here: https://code.google.com/p/msysgit/downloads/list?can=3 And if I put git.exe in its own directory the only .dll it complains about is libiconv2.dll (well that, and some missing templates). Using these two alone seems to work fine.
Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/ -- /Jacob Carlborg Jan 08 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/8/11, Jacob Carlborg <doob me.com> wrote: Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/ I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it. Jan 08 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes: On Sat, 08 Jan 2011 17:32:05 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: On 1/8/11, Jacob Carlborg <doob me.com> wrote: Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/ I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it. Hmm, MSysGit comes with its own shell extension (GitCheetah), although it's just something to integrate the standard GUI tools (git gui / gitk) into the Explorer shell. It's optional, of course (installer option). -- Best regards, Vladimir mailto:vladimir thecybershadow.net Jan 08 2011 "Nick Sabalausky" <a a.a> writes: "Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message news:mailman.493.1294500734.4748.digitalmars-d puremagic.com... On 1/8/11, Jacob Carlborg <doob me.com> wrote: Ever heard of TortoiseGit: http://code.google.com/p/tortoisegit/ I can't stand Turtoise projects. They install explorer shells and completely slow down the system whenever I'm browsing through the file system. "Turtoise" is a perfect name for it. You need to go into the "Icon Overlays" section of the settings and set up the "Exclude Paths" and "Include Paths" (Exclude everything, ex "C:\*", and then include whatever path or paths you keep all your projects in.) Once I did that (on TortoiseSVN) the speed was perfectly fine, even though my system was nothing more than an old single-core Celeron 1.7 GHz with 1GB RAM (it's back up to 2GB now though :)). Jan 08 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes: On 1/8/11, Nick Sabalausky <a a.a> wrote: You need to go into the "Icon Overlays" section of the settings and set up the "Exclude Paths" and "Include Paths" (Exclude everything, ex "C:\*", and then include whatever path or paths you keep all your projects in.) Ok thanks, I might give it another try. Jan 08 2011 "Nick Sabalausky" <a a.a> writes: "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message news:op.vow11fqdtuzx1w cybershadow.mshome.net... On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: I don't think git really needs MSYS? I mean I've just installed git again and it does have it's own executable runnable from the console. MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though. That might not be too bad then if it's all packaged well. The main problem with MSYS/MinGW is just getting the damn thing downloaded, installed and running properly. Do you need to actually use the MSYS/MinGW command-line, or is that all hidden away and totally behind-the-scenes? Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes: Nick Sabalausky Wrote: "Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message news:op.vow11fqdtuzx1w cybershadow.mshome.net... On Fri, 07 Jan 2011 04:09:04 +0200, Andrej Mitrovic <andrej.mitrovich gmail.com> wrote: I don't think git really needs MSYS? 
I mean I've just installed git again and it does have its own executable runnable from the console. MSysGit comes with its own copy of MSys. It's pretty transparent to the user, though.

That might not be too bad then if it's all packaged well. The main problem with MSYS/MinGW is just getting the damn thing downloaded, installed and running properly. Do you need to actually use the MSYS/MinGW command line, or is that all hidden away and totally behind-the-scenes?

Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes:

Nick Sabalausky Wrote: [snip] Do you need to actually use the MSYS/MinGW command line, or is that all hidden away and totally behind-the-scenes?

I am able to run git commands from powershell. I ran a single install program that made it all happen for me. You can run what it calls "git bash" to open mingw.

Jan 06 2011 "Nick Sabalausky" <a a.a> writes:

"Jesse Phillips" <jessekphillips+D gmail.com> wrote in message news:ig61ni$frh$1 digitalmars.com... [snip] I am able to run git commands from powershell. I ran a single install program that made it all happen for me. You can run what it calls "git bash" to open mingw.

I just tried the msysgit installer that Andrej linked to. I didn't try to use or create any repository, but everything seems to work great so far. Painless installer, Git GUI launches fine, "git" works from my ordinary Windows command line, and I never had to touch MSYS directly in any way. Nice!

Jan 07 2011
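(Nick's sanity check is easy to reproduce from a plain Windows command prompt, without ever opening the MSYS shell; the name, e-mail and repository below are only example values:

git --version
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git clone git://github.com/D-Programming-Language/druntime.git
cd druntime
git log -5 --oneline    # confirm the history came across

If all of those work from cmd.exe, the MSYS layer really is behind-the-scenes.)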
"Nick Sabalausky" <a a.a> writes:

"Vladimir Panteleev" <vladimir thecybershadow.net> wrote in message news:op.vowx58gqtuzx1w cybershadow.mshome.net... Git is expected to be slower on Windows, since it runs on top of cygwin/msys.

I'd consider running under MSYS to be a *major* disadvantage. MSYS is barely usable garbage (and cygwin is just plain worthless).

Jan 06 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Fri, 07 Jan 2011 03:30:16 +0200, Nick Sabalausky <a a.a> wrote: I'd consider running under MSYS to be a *major* disadvantage. MSYS is barely usable garbage (and cygwin is just plain worthless).

Why? MSysGit works great here! I have absolutely no issues with it. It doesn't pollute PATH, either, because by default only one directory with git/gitk is added to PATH. MSysGit can even integrate with PuTTYLink and use your PuTTY SSH sessions (but you can of course also use MSys' OpenSSH). Git GUI and Gitk even run better on Windows in my experience (something weird about Tcl/Tk on Ubuntu).

-- Best regards, Vladimir mailto:vladimir thecybershadow.net

Jan 06 2011 Russel Winder <russel russel.org.uk> writes:

On Fri, 2011-01-07 at 02:55 +0200, Vladimir Panteleev wrote: [ . . . ] We've had a discussion in #d (IRC), and the general consensus there seems to be strongly in favor of Git/GitHub. For completeness (there's been a discussion before) here are my arguments:

If the active D contributors are mostly in favour of Git then go for it. Personally I would go with Mercurial, but a shift to DVCS is way, way more important than which DVCS!

1) Git has the largest user base - more people will be able to get started hacking on the source immediately.

As with all statistics, you can prove nigh on any statement. I doubt Git actually has the largest user base, but it does have the zeitgeist. O'Reilly declared Git the winner in the DVCS race three years ago, and all the Linux, Ruby on Rails, etc. hype is about Git. On the other hand Sun/Oracle, Python, etc., etc. went with Mercurial. Mercurial and Bazaar are a smoother transition from Subversion, which may or may not be an issue.

(GitHub compared to DSource below, some of these also apply to Gitorious, Bitbucket, Launchpad) 2) One-click forking - you can easily publish improvements that are easily discoverable to people interested in the project. (This practically guarantees that an open-source project will never hit a dead end, as long as some people are interested in it - both occasional patches and maintained forks are easily discoverable.)

I think this is just irrelevant hype. The real issue is not how easy it is to fork a repository; the issue is how easy it is to create changesets, submit changesets for review, and merge changesets into Trunk. I guess the question is not about repositories, it is about review tools: Gerrit, Rietveld, etc. (Jokes about Guido's choice of the name Rietveld should be considered passé, if not part of the furniture :-) (cf. http://en.wikipedia.org/wiki/Gerrit_Rietveld)

3) UI for pull requests (requests to merge changes in a forked repository upstream), with comments.

Launchpad certainly supports this, as I think BitBucket does. It is an important issue.

4) Inline comments (you can comment on a specific line in a commit/patch). This integrates very nicely with 3) for great code review capabilities.

Better still, use a changeset review processing tool rather than just a workflow?

5) (Unique to GitHub) The network graph allows visualizing all commits in all forks of the project.

Do the Linux folk use this? I doubt it; once you get to a very large number of forks, it will become useless. A fun tool, but only for medium-size projects. I guess the question is whether D will become huge or stay small?

6) GitHub is run by a commercial company, and the same infrastructure is used for hosting commercial projects. Therefore, you can expect better uptime and support.

Launchpad and BitBucket are run by commercial companies.

GitHub has integrated wiki, issues and downloads (all optional). One thing GitHub doesn't have that DSource has is forums.

Launchpad and BitBucket have all the same.

I think there is no "shame" in leaving DSource for DigitalMars projects, many large open-source projects use GitHub (see GitHub's front page).

Everyone complains about DSource so either change it or move from it.

Some existing D projects on GitHub: https://github.com/languages/D I think Jérôme's observations of Git performance are specific to Windows. Git is expected to be slower on Windows, since it runs on top of cygwin/msys. Here's a study on the Git wiki: https://git.wiki.kernel.org/index.php/GitBenchmarks Google has done a study of Git vs. Mercurial in 2008: http://code.google.com/p/support/wiki/DVCSAnalysis The main disadvantage they found in Git (poor performance over HTTP) doesn't apply to us, and I believe it was addressed in recent versions anyway.
Disclaimer: I use Git, and avoid Mercurial if I can mainly because I don't want to learn another VCS. Nevertheless, I tried to be objective above. As I mentioned on IRC, I strongly believe this must be a fully-informed decision, since changing VCSes again is unrealistic once it's done.

I have to disagree that your presentation was objective, but let us leave it aside so as to avoid flame wars or becoming uncivil. In the end there is a technical choice to be made between Git and Mercurial on the one side and Bazaar on the other, since the repository/branch model is so different. If the choice is between Git and Mercurial, then it is really down to personal prejudice, tribalism, etc. If the majority of people who are genuinely active in creating changesets want to go with Git, then do it. Having interminable debates on Git vs. Mercurial is the real enemy. NB This is a decision that should be made by the people *genuinely* active in creating code changes -- people like me who are really just D users really do not count in this election.

--
Russel.
=============================================================================
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder

Jan 07 2011 Don <nospam nospam.com> writes:

Andrei Alexandrescu wrote: On 1/6/11 9:18 AM, Don wrote: Walter Bright wrote: Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.451.1294306555.4748.digitalmars-d puremagic.com... On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote:

That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers.

Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius

I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.

I don't, either.

There's no difference if you're only making one patch, but once you make more, there's a significant difference. I can generally manage to fix about five bugs at once, before they start to interfere with each other. After that, I have to wait for some of the bugs to be integrated into the trunk, or else start discarding changes from my working copy. Occasionally I also use my own DMD local repository, but it doesn't work very well (it gets out of sync with the trunk too easily, because SVN isn't really set up for that development model). I think that we should probably move to Mercurial eventually. I think there's potential for two benefits: (1) quicker for you to merge changes in; (2) increased collaboration between patchers. But due to the pain of changing the development model, I don't think it's a change we should make in the near term.
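(An aside on the five-bugs problem Don describes: this is exactly what cheap topic branches are for. A sketch with made-up bug numbers, not a description of any existing DMD workflow:

git checkout -b fix-1234 master    # first bug, branched from trunk
# ...edit, test, commit...
git checkout -b fix-5678 master    # second bug, fully independent of the first
# ...edit, test, commit...
# when fix-1234 is accepted upstream:
git checkout master && git pull
git branch -d fix-1234             # use -D if upstream rebased or squashed it
# replay the still-pending fix onto the updated trunk:
git checkout fix-5678 && git rebase master

Each pending fix lives on its own branch, so five, or fifty, of them never interfere with each other or with the working copy.)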
What are the advantages of Mercurial over git? (git does allow multiple branches.)

Andrei

Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition). Technically, I don't think there's much difference between git and Mercurial, compared to how different they are from svn.

Jan 06 2011 "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:

On Fri, 07 Jan 2011 08:53:06 +0100, Don wrote: Andrei Alexandrescu wrote: What are the advantages of Mercurial over git? (git does allow multiple branches.) Andrei Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).

I don't think Git's SVN hostility is a problem in practice. AFAIK there are tools (git-svn comes to mind) that can transfer the contents of an SVN repository, with full commit history and all, to a Git repo. Also, it will only have to be done once, so that shouldn't weigh too heavily on the decision.

Technically, I don't think there's much difference between git and Mercurial, compared to how different they are from svn.

Then my vote goes to Git, simply because that's what I'm familiar with.

-Lars

Jan 07 2011 Jonathan M Davis <jmdavisProg gmx.com> writes:

On Friday 07 January 2011 03:33:48 Lars T. Kyllingstad wrote: [snip: Don's answer and Lars's reply, quoted in full above]

Well, you get the full commit history if you use git-svn to commit to an svn repository. I'm not sure it deals with svn branches very well though, since svn treats those as separate files, and so each branch is actually a separate set of files, and I don't believe that git will consider them to be the same. However, since I always just use git-svn on the trunk of whatever svn repository I'm dealing with, I'm not all that experienced with how svn branches look in a git repository's history. And it may be that there's a way to specifically import an svn repository in a manner which makes all of those branches look like a single set of files to git. I don't know. But on the whole, converting from Subversion to git is pretty easy. We technically use svn at work, but I always just use git-svn. Life is much more pleasant that way.

- Jonathan M Davis

Jan 07 2011 "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:

On Fri, 07 Jan 2011 03:42:33 -0800, Jonathan M Davis wrote: On Friday 07 January 2011 03:33:48 Lars T. Kyllingstad wrote: On Fri, 07 Jan 2011 08:53:06 +0100, Don wrote: Andrei Alexandrescu wrote: What are the advantages of Mercurial over git? (git does allow multiple branches.)
Andrei

Essentially political and practical rather than technical. Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).

I don't think Git's SVN hostility is a problem in practice. AFAIK there are tools (git-svn comes to mind) that can transfer the contents of an SVN repository, with full commit history and all, to a Git repo. Also, it will only have to be done once, so that shouldn't weigh too heavily on the decision.

Technically, I don't think there's much difference between git and Mercurial, compared to how different they are from svn.

Then my vote goes to Git, simply because that's what I'm familiar with. -Lars

Well, you get the full commit history if you use git-svn to commit to an svn repository. I'm not sure it deals with svn branches very well though, [...]

Here's a page that deals with importing an SVN repo in git: http://help.github.com/svn-importing/ Actually, based on that page, it seems Github can automatically take care of the whole transfer for us, if we decide to set up there.

-Lars

Jan 07 2011 Jesse Phillips <jessekphillips+D gmail.com> writes:

Jonathan M Davis Wrote: [snip: the git-svn paragraph quoted in full above]

You can have git-svn import the standard svn layout. This will then import the tags and branches. As best I can tell, the reason it takes so long to do this is that it is analyzing each branch to see where it occurred, and then making the proper branches as they would be in Git. You can specify your own layout if your branches aren't set up like a standard svn repository.

Jan 07 2011
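(Jesse's point in command form; a sketch only, and the checkout URL is a guess based on the dsource URLs quoted elsewhere in this thread:

git svn clone --stdlayout http://svn.dsource.org/projects/dmd dmd-git
cd dmd-git
git branch -r    # the imported svn branches and tags appear as remote branches

With --stdlayout, git-svn assumes the usual trunk/branches/tags directories; non-standard layouts can be described with the -T, -b and -t options instead.)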
Walter Bright <newshound2 digitalmars.com> writes:

Don wrote: Mercurial doesn't have the blatant hostility to Windows that is evident in git. It also doesn't have the blatant hostility to svn (in fact, it tries hard to ease the transition).

I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.

Jan 07 2011 "Nick Sabalausky" <a a.a> writes:

"Walter Bright" <newshound2 digitalmars.com> wrote in message news:ig7mee$1r10$1 digitalmars.com... Don wrote: [snip] I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.

When I installed msysgit I got Git entries added to Explorer's right-click menu. Do those not work?

Jan 07 2011 "Vladimir Panteleev" <vladimir thecybershadow.net> writes:

On Fri, 07 Jan 2011 20:33:42 +0200, Walter Bright <newshound2 digitalmars.com> wrote: Don wrote: [snip] I've been using git on a couple small projects, and I find that I have to transfer the files to Linux in order to check them in to git.

Could you please elaborate? A lot of people are using Git on Windows without any problems.

-- Best regards, Vladimir

Jan 07 2011 Walter Bright <newshound2 digitalmars.com> writes:

Vladimir Panteleev wrote: On Fri, 07 Jan 2011 20:33:42 +0200, Walter Bright <newshound2 digitalmars.com> wrote: [snip] Could you please elaborate? A lot of people are using Git on Windows without any problems.

No download for Windows from the git site.

Jan 07 2011 Jesse Phillips <jessekphillips+D gmail.com> writes:

Walter Bright Wrote: [snip] No download for Windows from the git site.

Direct: Website:

Jan 07 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes:

On 1/7/11, Walter Bright <newshound2 digitalmars.com> wrote: No download for Windows from the git site.

There's a big Windows icon on the right: http://git-scm.com/

Jan 07 2011 David Nadlinger <see klickverbot.at> writes:

On 1/7/11 10:21 PM, Walter Bright wrote: No download for Windows from the git site.

Are you deliberately trying to make yourself look ignorant? Guess what's right at the top of http://git-scm.com/…

David

Jan 07 2011 David Nadlinger <see klickverbot.at> writes:

On 1/7/11 10:31 PM, David Nadlinger wrote: Are you deliberately trying to make yourself look ignorant? Guess what's right at the top of http://git-scm.com/…

I just realized that this might have sounded a bit too harsh; there was no offense intended. I am just somewhat annoyed by the frequency with which easy-to-research facts are misquoted on this newsgroup right now, as well as by how this could influence the way D and the D community are perceived as a whole.

David

Jan 07 2011 bearophile <bearophileHUGS lycos.com> writes:

David Nadlinger: I just realized that this might have sounded a bit too harsh; there was no offense intended.

Being gentle and not offensive is Just Necessary [TM] in a newsgroup like this. On the other hand Walter is a pretty adult person so I think he's not offended.

Jan 07 2011 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:

On 1/7/11 5:49 PM, bearophile wrote: Being gentle and not offensive is Just Necessary [TM] in a newsgroup like this. On the other hand Walter is a pretty adult person so I think he's not offended.
Bye, bearophile

Well he is adult all right. Pretty? Maybe not that much :o).

Andrei

Jan 07 2011 Andrej Mitrovic <andrej.mitrovich gmail.com> writes:

I don't recall Walter ever losing his cool, which is quite an achievement on this NG.

Jan 07 2011 Walter Bright <newshound2 digitalmars.com> writes:

Andrei Alexandrescu wrote: Well he is adult all right. Pretty? Maybe not that much :o).

Hawt? Perhaps!

Jan 07 2011 Walter Bright <newshound2 digitalmars.com> writes:

David Nadlinger wrote: On 1/7/11 10:21 PM, Walter Bright wrote: No download for Windows from the git site. Are you deliberately trying to make yourself look ignorant? Guess what's right at the top of http://git-scm.com/…

So it is. The last time I looked, it wasn't there.

Jan 07 2011 David Nadlinger <see klickverbot.at> writes:

On 1/7/11 8:53 AM, Don wrote: What are the advantages of Mercurial over git? (git does allow multiple branches.) Andrei Essentially political and practical rather than technical. […]

By the way, I just stumbled upon this page presenting arguments in favor of Git, which seems about as objective to me as it will probably get: http://whygitisbetterthanx.com/ Obviously, this site is biased in the sense that it doesn't mention possible arguments against Git – do you know of any similar collections for other DVCS?

David

Jan 08 2011 "Jérôme M. Berger" <jeberger free.fr> writes:

David Nadlinger wrote: [snip] By the way, I just stumbled upon this page presenting arguments in favor of Git, which seems about as objective to me as it will probably get: http://whygitisbetterthanx.com/ Obviously, this site is biased in the sense that it doesn't mention possible arguments against Git – do you know of any similar collections for other DVCS?

* Cheap local branching: Available in Mercurial with the LocalbranchExtension.
* Git is fast: Probably true on Linux, not so on Windows. The speed is acceptable for most operations, but it is slower than Mercurial.
* Staging area: Could actually be seen as a drawback since it adds extra complexity. Depending on your workflow, most of the use cases can be handled more easily in Mercurial with the crecord extension.
* GitHub: Bitbucket.
* Easy to learn: Mouahahahahahahah!

The other points are true, but they are also applicable to any DVCS.

Jerome
--
mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr

Jan 09 2011 Gour <gour atmarama.net> writes:

On Thu, 06 Jan 2011 09:42:29 -0600 "Andrei" == Andrei Alexandrescu wrote:

Andrei> What are the advantages of Mercurial over git? (git does allow
Andrei> multiple branches.)

It's not as established as Git/Mercurial, but I like it a lot... coming from Sqlite's main developer - Fossil (http://fossil-scm.org). Simple command set, very powerful, using an sqlite3 back-end for storage, integrated wiki, distributed bug tracker, extra lite for hosting... it's possible to import/export from/to Git's fast-import/export (see http://fossil-scm.org/index.html/doc/trunk/www/inout.wiki).

Sincerely, Gour

--
Gour | Hlapicina, Croatia | GPG key: CDBF17CA
----------------------------------------------------------------

Jan 07 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes:

Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.451.1294306555.4748.digitalmars-d puremagic.com...
On Thu, Jan 6, 2011 at 12:28 AM, Walter Bright <newshound2 digitalmars.com> wrote:

That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers.

Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius

I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.

There isn't, because it is basically the same workflow. The reason why people would prefer git-style fork and merge over sending svn patches is that these tools do the same job much better. github increases the usability further and gives you nice PR for free. OTOH, I understand that it's not exactly attractive to invest time in replacing something that also works right now.

Jan 06 2011 Russel Winder <russel russel.org.uk> writes:

On Thu, 2011-01-06 at 03:35 -0600, Caligo wrote: [ . . . ] Perhaps using a modern SCM like Git might help? Everyone could have (and should have) commit rights, and they would send pull requests. You or one of the managers would then review the changes and pull and merge with the main branch. It works great; just check out Rubinius on Github to see what I mean: https://github.com/evanphx/rubinius

Whilst I concur (massively) that Subversion is no longer the correct tool for collaborative working, especially on FOSS projects, but also for proprietary ones, I am not sure Git is the best choice of tool. Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are actually much easier to work with. Where Git has GitHub, Mercurial has BitBucket, and Bazaar has Launchpad.

--
Russel.
=============================================================================
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder

Jan 06 2011 Jesse Phillips <jessekphillips+D gmail.com> writes:

Russel Winder Wrote: Whilst I concur (massively) that Subversion is no longer the correct tool for collaborative working, especially on FOSS projects, but also for proprietary ones, I am not sure Git is the best choice of tool. Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are actually much easier to work with. Where Git has GitHub, Mercurial has BitBucket, and Bazaar has Launchpad.

First I think one must be convinced to move. Then that using a social site adds even more. Then we can discuss which one to use. My personal choice is git because I don't use the others. And this was a great read: http://progit.org/book/

Jan 06 2011 Jacob Carlborg <doob me.com> writes:

On 2011-01-06 07:28, Walter Bright wrote: Nick Sabalausky wrote: Automatically accepting all submissions immediately into the main line with no review isn't a good thing either. In that article he's complaining about MS, but MS is notorious for ignoring all non-MS input, period. D's already light-years ahead of that.
Since D's a purely volunteer effort, and with a lot of things to be done, sometimes things *are* going to take a while to get in. But there's just no way around that without major risks to quality. And yeah, Walter could grant main-line DMD commit access to others, but then we'd be left with a situation where no single lead dev understands the whole program inside and out - and when that happens to projects, that's inevitably the point where it starts to go downhill.

That's pretty much what I'm afraid of, losing my grip on how the whole thing works if there are multiple dmd committers.

That is very understandable. Maybe we can have a look at the Linux kernel development process: http://ldn.linuxfoundation.org/book/how-participate-linux-community As I understand it, Linus Torvalds' day-to-day work on the Linux kernel mostly consists of merging changes made in developer branches into the main branch.

On the bright (!) side, Brad Roberts has gotten the test suite in shape so that anyone developing a patch can run it through the full test suite, which is a prerequisite to getting it folded in.

Has this been announced (somewhere other than the DMD mailing list)? Where can one get the test suite? It should be available and easy to find, with instructions on how to run it. Somewhere on the Digitalmars site, and/or perhaps released with the DMD source code?

In the last release, most of the patches in the changelog were done by people other than myself, although yes, I vet and double-check them all before committing them.

-- /Jacob Carlborg

Jan 06 2011 Walter Bright <newshound2 digitalmars.com> writes:

Jacob Carlborg wrote: Has this been announced (somewhere other than the DMD mailing list)? Where can one get the test suite? It should be available and easy to find, with instructions on how to run it. Somewhere on the Digitalmars site, and/or perhaps released with the DMD source code?

It's part of dmd on svn: http://www.dsource.org/projects/dmd/browser/trunk/test

Jan 06 2011
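(A sketch of fetching the suite, assuming the checkout URL mirrors the browser URL Walter gives and that the suite is make-driven; neither assumption is verified here:

svn co http://svn.dsource.org/projects/dmd/trunk/test dmd-test
cd dmd-test
make    # inspect the checked-out files for the actual entry point and options

The checked-out makefile or readme, if present, is the authoritative word on how to actually drive the tests.)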
One cool thing about Github that I like is gist: https://gist.github.com/ It's a pastebin, but it uses Git and supports D syntax. People are always sharing snippets on these newsgroups, and it would have been nice if they were gists. I've never used Bazaar, so no comment on that. But, between Git and Mercurial, I vote for Git. Jan 06 2011 Daniel Gibson <metalcaedes gmail.com> writes: Am 07.01.2011 03:20, schrieb Caligo: On Thu, Jan 6, 2011 at 5:50 AM, Russel Winder <russel russel.org.uk <mailto:russel russel.org.uk>> wrote: Whilst I concur (massively) that Subversion is no longer the correct tool for collaborative working, especially on FOSS projects, but also for proprietary ones, I am not sure Git is the best choice of tool. Whilst Git appears to have the zeitgeist, Mercurial and Bazaar are actually much easier to work with. Where Git has GitHub, Mercurial has BitBucket, and Bazaar has Launchpad. -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net <mailto:sip%3Arussel.winder ekiga.net> 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk <mailto:russel russel.org.uk> London SW11 1EN, UK w: www.russel.org.uk <http://www.russel.org.uk> skype: russel_winder BitBucket has copied almost everything from Github, and I don't understand how they've never been sued. http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html Yeah, see also: http://schacon.github.com/bitbucket.html by the same author When this rant was new I read a page that listed where Github stole their ideas and designs (Sourceforce for example), but I can't find it anymore. This rant was bullshit, as even the author seems to have accepted. I don't understand why people still mirror and link this crap. Jan 06 2011 Caligo <iteronvexor gmail.com> writes: On Thu, Jan 6, 2011 at 8:47 PM, Daniel Gibson <metalcaedes gmail.com> wrote: Yeah, see also: http://schacon.github.com/bitbucket.html by the same author When this rant was new I read a page that listed where Github stole their ideas and designs (Sourceforce for example), but I can't find it anymore. This rant was bullshit, as even the author seems to have accepted. I don't understand why people still mirror and link this crap. hmmm...Interesting! I did not know that, and thanks for the share. There is even a discussion about it on reddit where the author apologizes. I don't understand why he would do such a thing. Jan 06 2011 "Nick Sabalausky" <a a.a> writes: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.461.1294366839.4748.digitalmars-d puremagic.com... BitBucket has copied almost everything from Github, and I don't understand how they've never been sued. http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah. Jan 06 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.461.1294366839.4748.digitalmars-d puremagic.com... BitBucket has copied almost everything from Github, and I don't understand how they've never been sued. 
http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah. Really? When I first visited bitbucket, I though this was from the makers of github launching a hg site from their github code, with some slightly altered css. There is quite a difference between github, gitorious and launchpad on the other hand. Jan 07 2011 Lutger Blijdestijn <lutger.blijdestijn gmail.com> writes: Lutger Blijdestijn wrote: Nick Sabalausky wrote: "Caligo" <iteronvexor gmail.com> wrote in message news:mailman.461.1294366839.4748.digitalmars-d puremagic.com... BitBucket has copied almost everything from Github, and I don't understand how they've never been sued. http://dev.pocoo.org/~blackbird/github-vs-bitbucket/bitbucket.html That page looks like the VCS equivalent of taking pictures of sandwiches from two different restaurants and then bitching "Oh my god! What a blatant copy! Look, they both have meat, lettuce and condiments between slices of bread! And they BOTH have the lettuce on top of the meat! What a pathetic case of plagiarism!" Bah. Really? When I first visited bitbucket, I though this was from the makers of github launching a hg site from their github code, with some slightly altered css. There is quite a difference between github, gitorious and launchpad on the other hand. To be clear: not that I care much, good ideas should be copied (or, from your perspective, bad ideas could ;) ) Jan 07 2011 Russel Winder <russel russel.org.uk> writes: On Thu, 2011-01-06 at 20:20 -0600, Caligo wrote: < . . . ignoring all the plagiarism rubbish which has been dealt with by others . . . > There is also Gitorious. It only offers free hosting and it is more team orientated than Github, but Github has recently added the "Organization" feature. The interesting thing about Gitorious is that you can run it on your own server. I don't think you can do that with Github. I have never used Gitorious (though I do have an account). My experience is limited to GitHub, BitBucket, GoogleCode, and Launchpad. The crucial difference between GitHub and BitBucket on the one hand and Launchpad on the other is that Launchpad supports teams as well as individuals. GoogleCode enforces teams and doesn't support individuals at all so doesn't really count. Where SourceForge and all the other sit these days is I guess a moot point.=20 One cool thing about Github that I like is gist: https://gist.github.com/ =20 It's a pastebin, but it uses Git and supports D syntax. People are always sharing snippets on these newsgroups, and it would have been nice if they were gists. =20 Personally I have never used these things, nor found a reason to do so. I've never used Bazaar, so no comment on that. But, between Git and Mercurial, I vote for Git. Mercurial and Git are very similar in so many ways, though there are some crucial differences (the index in Git being the most obvious, but for me the most important is remote tracking branches). Bazaar has a completely different core model. Sadly, fashion and tribalism tend to play far too important a role in all discussions of DVCS -- I note no-one has mentioned Darcs or Monotone yet! And recourse to argument about number of projects using a given DVCS are fatuous. 
What matters is the support for the VCS in the tool chain and the workflow. It is undoubtedly the case that Git and Mercurial currently have the most support across the board, though Canonical are trying very hard to make Bazaar a strong player -- sadly they are focusing too much on Ubuntu and not enough on Windows to stay in the game for software developers (there is no Visual Studio support). Anecdotal experience seems to indicate that Mercurial has a more average-developer-friendly use model -- though there are some awkward corners. Despite a huge improvement to Git over the last 3 years, it still lags Mercurial on this front. However, worrying about the average developer is more important for companies and proprietary work than it is for FOSS projects -- where the skill set appears to be better than average.

All in all it is up to the project lead to make a choice and for everyone else to live with it. I would advise Walter to shift to one of Mercurial or Git, but if he wants to stick with Subversion -- and suffer the tragic inability to sanely work with branches -- that is his choice. As any Git/Mercurial/Bazaar user knows, Git, Mercurial and Bazaar can all be used as Subversion clients. However, without creating a proper bridge these clients cannot be used in a DVCS peer group because of the rebasing that is enforced -- at least by Git and Mercurial; Bazaar has a mode of working that avoids the rebasing, and so the Subversion repository appears as a peer in the DVCS peer group.

Perhaps the interesting models to consider are GoogleCode, which chose to support Mercurial and Subversion, and Codehaus, which chose to support Git and Subversion (using Gitosis). Of course DSource already supports all three.

-- Russel.
=============================================================================
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: russel russel.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Jan 07 2011

Jacob Carlborg <doob me.com> writes: On 2011-01-05 22:39, bearophile wrote: Jacob Carlborg: And sometimes Mac OS X is *slightly* ahead of the other OSes; Tango has had support for dynamic libraries on Mac OS X using DMD for quite a while now. For D2 a patch is just sitting there in bugzilla waiting for the last part of it to be committed. I'm really pushing this because people seem to forget it.

A quotation from here: http://whatupdave.com/post/1170718843/leaving-net "Also stop using codeplex, it's not real open source! Real open source isn't submitting a patch and waiting/hoping that one day it might be accepted and merged into the main line." Bye, bearophile

So what are you saying here? That I should fork druntime and apply the patches myself? I already have too many projects to handle, I probably can't handle yet another one. -- /Jacob Carlborg Jan 06 2011

bearophile <bearophileHUGS lycos.com> writes: Jacob Carlborg: So what are you saying here? That I should fork druntime and apply the patches myself? I already have too many projects to handle, I probably can't handle yet another one.

See my more recent post for some answers. I think changing how the DMD source code is managed (allowing people to create branches, etc.) is not going to increase your work load.
On the other hand it's going to make D more open source for people that like this and have some free time. Bye, bearophile Jan 06 2011

Walter Bright <newshound2 digitalmars.com> writes: bearophile wrote: How does D square up, performance-wise, to C and C++? Has anyone got any benchmark figures? DMD has an old back-end, it doesn't use SSE (or AVX) registers yet (the 64 bit version will use 8 or more SSE registers), and sometimes it's slower for integer programs too.

The benchmarks you posted where it was supposedly slower in integer math turned out to be mistaken.

I've seen DMD programs slow down if you nest two foreach loops inside each other. There is a collection of different slow microbenchmarks. But LDC1 is able to run D1 code that looks like C about as fast as C, or sometimes a bit faster. DMD2 uses thread local memory by default, which in theory slows code down a bit if you use global data, but I have never seen a benchmark that shows this slowdown clearly (and there is __gshared too, but sometimes it seems a placebo). If you use higher level constructs your program will often go slower.

Rubbish. The higher level constructs are "lowered" into the equivalent low level constructs.

Often one of the most important things for speed is memory management; D encourages you to heap allocate a lot (class instances are usually on the heap), and this is very bad for performance,

That is not necessarily true. Using the gc can often result in higher performance than explicit allocation, for various subtle reasons. And saying it is "very bad" is just wrong.

also because the built-in GC doesn't have an Eden generation managed as a stack. So if you want more performance you must program like in Pascal/Ada, stack-allocating a lot, or using memory pools, etc. It's largely a matter of self-discipline while you program.

This is quite wrong. Jan 05 2011

"Steven Schveighoffer" <schveiguy yahoo.com> writes: On Wed, 05 Jan 2011 14:53:16 -0500, Walter Bright <newshound2 digitalmars.com> wrote: bearophile wrote: Often one of the most important things for speed is memory management; D encourages you to heap allocate a lot (class instances are usually on the heap), and this is very bad for performance, That is not necessarily true. Using the gc can often result in higher performance than explicit allocation, for various subtle reasons. And saying it is "very bad" is just wrong.

In practice, it turns out D's GC is pretty bad performance-wise. Avoiding the heap (or using the C heap) whenever possible usually results in a vast speedup. This is not to say that the GC concept is to blame; I think we just have a GC that is not the best out there. It truly depends on the situation. In something like a user app where the majority of the time is spent sleeping while waiting for events, the GC most likely does very well. I expect the situation to get better when someone has time to pay attention to increasing GC performance. -Steve Jan 05 2011

bearophile <bearophileHUGS lycos.com> writes: For people interested in doing their own benchmarking of D, there are some synthetic benchmarks here: http://is.gd/kbiQM Many others on request. Bye, bearophile Jan 05 2011

Long Chang <changedalone gmail.com> writes: 2011/1/6 Walter Bright <newshound2 digitalmars.com> <snip>

I've been using D for 3 years. I am not in the newsgroup much because my English is very poor. D is excellent; I have tried it with Libevent, Libev, pcre, sqlite, c-ares, dwt, and a lot of other amazing libraries. It works great with C libs, and I enjoy it very much. I work as a web developer, and I have also tried to use D in the web field; it did not turn out well. Adam D. Ruppe posted some interesting code here, and I have found a lot of people trying the web field, for example: mango, https://github.com/temiy/daedalus, Sendero... But in the end I have to say that most D web projects are dying. D is like a beautiful girlfriend: playing with her can be a lot of fun, but she is scared of making promises, and you can't count your life on her. She is not a good potential marriage; her life is still a mess, and day after day she gets smarter but not more mature. So if you want to do some serious work, you'd better choose another language; if you just want fun, D is a good companion. Jan 05 2011

"Nick Sabalausky" <a a.a> writes: "Long Chang" <changedalone gmail.com> wrote in message news:mailman.445.1294291595.4748.digitalmars-d puremagic.com... <snip>

I'd say D is more like an above-average teen. Sure, they're young and naturally may still fuck up now and then, but they're operating on a strong foundation and just need a little more training. Jan 05 2011

Jesse Phillips <jessekphillips+D gmail.com> writes: Walter Bright Wrote: I'm not sure I see how that's any different from everyone having "create and submit a patch" rights, and then having Walter or one of the managers review the changes and merge/patch with the main branch.

I don't, either. I actually found some D repositories at github, not really up-to-date: https://github.com/d-lang Don't know who d-lang is, but they probably should have added some code. And it would be better if Walter was managing it... There are many benefits to the coder in using a distributed VCS. And you can use git with SVN, but you may run into other issues, as pointed out by Don.

Now, if you add github or another social repository site, what you have is the ability for anyone to publicly display their patches, merge in others' patches, or demonstrate new features (tail const objects) with a visible connection to the main branch. Then, on top of that, patches are submitted as a pull request: http://help.github.com/pull-requests/ This provides review of the changes and public visibility into the current requests against the main branch. The benefit to Walter or even the patch writer would not be great, but it provides a lot of visibility to the observer. And using this model still allows Walter control over every patch that comes into the main branch. But it will make it 20x easier for those that want to build their own to roll in all available patches (aren't numbers with no data to back them great). In any case, the simplicity of branching in a distributed VCS definitely makes using one much nicer than SVN. Jan 06 2011
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23607498407363892, "perplexity": 8626.724228053827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00200-ip-10-145-167-34.ec2.internal.warc.gz"}
http://gateoverflow.in/103993/test-book-test
An undirected graph G with only one simple path between each pair of vertices has two vertices of degree 4, one vertex of degree 3 and two vertices of degree 2. The number of vertices of degree 1 is _____________ ?

In the question it is given that G is an undirected graph with only one simple path between each pair of vertices. This implies that the graph is a tree. Now,
• 2 vertices have degree 4.
• 1 vertex has degree 3.
• 2 vertices have degree 2.
Let the number of vertices of degree 1 be x. According to the Handshaking Lemma, $\sum_{v\in V}\deg(v) = 2|E|$. Since the graph is a tree, a graph on n vertices has n-1 edges, so (2*4) + (1*3) + (2*2) + (x*1) = 2[(2+1+2+x)-1]. Solving 15 + x = 8 + 2x gives x = 7. So 7 vertices have degree 1.
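The degree count can also be brute-forced; here is a small Python check (mine, not from the original answer) of the same handshaking-lemma equation:

```python
# A tree on n vertices has n - 1 edges, so by the handshaking lemma
# the degrees must sum to 2(n - 1).
known = [4, 4, 3, 2, 2]              # the five vertices with given degrees

for x in range(20):                  # x = number of degree-1 vertices
    degrees = known + [1] * x
    n = len(degrees)
    if sum(degrees) == 2 * (n - 1):
        print("degree-1 vertices:", x)   # prints 7
```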
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7245733737945557, "perplexity": 463.5177889881827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110774.86/warc/CC-MAIN-20170822123737-20170822143737-00689.warc.gz"}
https://bitbucket.org/aorta/decompose/commits/c63e3d2956f73bc680a1a080a53e3050
# Commits

committed c63e3d2 (Draft): Merge - Merging with Skeel
Parent commits: 20d7eae, 9c50732

# File biblio/linalgdecompose.bib

File contents unchanged.

# File decompose-eigenvaluesrelated.tex

the corresponding diagonal elements of $T$, $\lambda_i = S_{ii}/T_{ii}$, are the generalised eigenvalues that solve the generalised eigenvalue problem $Av=\lambda Bv$, where $\lambda$ is an unknown scalar and $v$ is an unknown
-nonzero vector\cite{hornjohnson1990matrix,golubvanloan1996matrix}.
+nonzero vector\cite{hornjohnson1986matrixanalysis,golubvanloan1996matrix}.

\subsubsection{Real QZ Decomposition} Real version of QZ Decomposition: $A=QSZ^T$ and $B=QTZ^T$ where $A$, $B$, $Q$,
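The relation $\lambda_i = S_{ii}/T_{ii}$ from this file can be checked numerically with SciPy's QZ routine. A minimal sketch (the matrices below are made up for illustration, and it assumes all real-QZ blocks are 1x1, i.e. all generalised eigenvalues are real):

```python
import numpy as np
from scipy.linalg import qz

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # hypothetical example matrices
B = np.array([[2.0, 0.0], [0.0, 1.0]])

# Real QZ: A = Q @ S @ Z.T and B = Q @ T @ Z.T, S quasi-triangular
S, T, Q, Z = qz(A, B, output="real")

# Generalised eigenvalues from the diagonals (valid for 1x1 blocks)
lam = np.diag(S) / np.diag(T)

# Each lambda should satisfy det(A - lambda * B) = 0
for l in lam:
    print(np.linalg.det(A - l * B))      # ~0 up to rounding
```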
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9960217475891113, "perplexity": 6009.517278132374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375091751.85/warc/CC-MAIN-20150627031811-00097-ip-10-179-60-89.ec2.internal.warc.gz"}
http://turbomachinery.asmedigitalcollection.asme.org/article.aspx?articleid=1467212
TECHNICAL PAPERS

# Heat Transfer and Aerodynamics of Turbine Blade Tips in a Linear Cascade

Author and Article Information: P. J. Newton, G. D. Lock, Department of Mechanical Engineering, University of Bath, Bath, UK; S. K. Krishnababu, H. P. Hodson, W. N. Dawes, Department of Engineering, University of Cambridge, Cambridge, UK; J. Hannis, Siemens Industrial Turbomachinery Ltd., Lincoln, UK; C. Whitney, Alstom Power Technology Centre, Leicester, UK

J. Turbomach. 128(2), 300-309 (Mar 01, 2004) (10 pages) doi:10.1115/1.2137745 History: Received October 01, 2003; Revised March 01, 2004

## Abstract

Local measurements of the heat transfer coefficient and pressure coefficient were conducted on the tip and near-tip region of a generic turbine blade in a five-blade linear cascade. Two tip clearance gaps were used: 1.6% and 2.8% chord. Data was obtained at a Reynolds number of $2.3\times10^5$ based on exit velocity and chord. Three different tip geometries were investigated: a flat (plain) tip, a suction-side squealer, and a cavity squealer. The experiments reveal that the flow through the plain gap is dominated by flow separation at the pressure-side edge and that the highest levels of heat transfer are located where the flow reattaches on the tip surface. High heat transfer is also measured at locations where the tip-leakage vortex has impinged onto the suction surface of the aerofoil. The experiments are supported by flow visualization computed using the CFX CFD code, which has provided insight into the fluid dynamics within the gap. The suction-side and cavity squealers are shown to reduce the heat transfer in the gap, but high levels of heat transfer are associated with locations of impingement, identified using the flow visualization and aerodynamic data. Film cooling is introduced on the plain tip at locations near the pressure-side edge within the separated region, and a net heat flux reduction analysis is used to quantify the performance of the successful cooling design.

## Figures

Figure 1: Low speed cascade modified for heat transfer measurements. Figure 3: The mesh heater. Figure 4: Cp contours as measured on the blade and tip. Figure 5: Cp contours as measured on the casing. Figure 6: Cross-chord measurements of Cp and h at x/C = 50%, H/C = 1.6%. Figure 7: CFX flow visualization. Figure 8: Heat transfer coefficients for plain tip geometry: top H/C = 1.6%, bottom H/C = 2.8%. Figure 9: Heat transfer coefficient for plain (top pair), squealer (middle) and cavity (bottom) geometries (W/m²K). Figure 10: Uncooled tip NHFR for H/C = 1.6%. Figure 11 (a)-(c): Cooled tip h, η and NHFR. Figure 12: CFD predictions of h (W/m²K) for plain tip, H/C = 1.6%.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.285907506942749, "perplexity": 8335.966645852328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447554191.27/warc/CC-MAIN-20141224185914-00094-ip-10-231-17-201.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Carbonate_hardness
# Carbonate hardness

Carbonate hardness, or carbonate alkalinity, is a measure of the alkalinity of water caused by the presence of carbonate ($\text{CO}_3^{2-}$) and bicarbonate ($\text{HCO}_3^-$) anions. Carbonate hardness is usually expressed either in parts per million (ppm or mg/L), or in degrees KH (dKH) (from the German "Karbonathärte"). One degree KH is equal to 17.848 mg/L (ppm) CaCO3, i.e. one degree KH corresponds to the carbonate and bicarbonate ions found in a solution of approximately 17.848 milligrams of calcium carbonate (CaCO3) per litre of water (17.848 ppm). Both measurements (mg/L or KH) are usually expressed as mg/L CaCO3, meaning the concentration of carbonate expressed as if calcium carbonate were the sole source of carbonate ions.

Carbonate and bicarbonate anions contribute to alkalinity due to their basic nature, hence their ability to neutralize acid. Mathematically, the carbonate anion concentration is counted twice because it can neutralize two protons, while bicarbonate is counted once, as it can neutralize one proton. Therefore, bicarbonates present in the water are converted to an equivalent concentration of carbonates when determining KH.

For example: an aqueous solution containing 120 mg NaHCO3 (baking soda) per litre of water will contain 1.4285 mmol/L of bicarbonate, since the molar mass of baking soda is 84.007 g/mol. This is equivalent in carbonate hardness to a solution containing 0.71423 mmol/L of (calcium) carbonate, or 71.485 mg/L of calcium carbonate (molar mass 100.09 g/mol). Since one degree KH = 17.848 mg/L CaCO3, this solution has a KH of 4.0052 degrees.

$\text{CT (mEq/L)} = [\text{HCO}_3^-] + 2\,[\text{CO}_3^{2-}]$

For water with a pH below 8.5, the $\text{CO}_3^{2-}$ concentration will be less than 1% of the $\text{HCO}_3^-$ concentration.

In a solution where only CO2 affects the pH, the carbonate hardness can be used to calculate the concentration of dissolved CO2 with the formula $\text{CO}_2 = 3 \cdot \text{KH} \cdot 10^{7-\text{pH}}$, where KH is in degrees of carbonate hardness and CO2 is given in ppm.

The term carbonate hardness is also sometimes used as a synonym for temporary hardness, in which case it refers to that portion of hard water that can be removed by processes such as boiling or lime softening, followed by separation of the water from the resulting precipitate.[1]
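As a small illustration of these unit conversions, here is a Python sketch (not part of the article) reproducing the baking-soda example and the CO2 rule of thumb; it uses a slightly more precise CaCO3 molar mass (100.0869 g/mol) than the rounded 100.09 quoted above:

```python
M_NAHCO3 = 84.007      # g/mol, NaHCO3
M_CACO3 = 100.0869     # g/mol, CaCO3
MG_PER_DKH = 17.848    # mg/L CaCO3 per degree KH

def dkh_from_nahco3(mg_per_litre):
    """Carbonate hardness (dKH) of a pure NaHCO3 solution."""
    mmol_hco3 = mg_per_litre / M_NAHCO3        # bicarbonate, mmol/L
    mg_caco3 = (mmol_hco3 / 2) * M_CACO3       # expressed as CaCO3
    return mg_caco3 / MG_PER_DKH

def co2_ppm(kh, ph):
    """Dissolved CO2 (ppm) from the rule CO2 = 3 * KH * 10^(7 - pH)."""
    return 3 * kh * 10 ** (7 - ph)

print(round(dkh_from_nahco3(120), 4))   # 4.0052, as in the example
print(co2_ppm(4.0, 7.0))                # 12.0 ppm at pH 7
```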
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858101487159729, "perplexity": 4315.867048252162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997859240.8/warc/CC-MAIN-20140722025739-00237-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/4197-plz-help.html
1. ## plz help

1> Is sqrt(x^2) = x an identity (true for all values of x)?

2> For the equation x - sqrt(x) = 0, perform the following: a) Solve for all values of x that satisfy the equation. b) Graph the two functions involved, y = x and y = sqrt(x), on the same graph (by plotting points if necessary). Show the points of intersection of these two graphs. c) How does the graph relate to part a?

2. Originally Posted by bobby77 1> Is sqrt(x^2)=x an identity (true for all values of x)? No, it isn't true if x is less than 0.

3. Originally Posted by bobby77 1> Is sqrt(x^2)=x an identity (true for all values of x)? The square root is by definition a positive number or zero, while x can be a negative number. So the answer is NO. $\sqrt{(-3)^2}\neq (-3)$

Originally Posted by bobby77 2> For the equation x-sqrt(x)=0 ... a) Solve for all values of x that satisfy the equation. Factorize the lhs of this equation: $x-\sqrt{x}=0 \Longrightarrow \sqrt{x}(\sqrt{x}-1)=0$. Thus $\sqrt{x}=0\ \vee\ \sqrt{x}=1$. Therefore x = 0 or x = 1.

Originally Posted by bobby77 b) Graph the functions on the same graph (by plotting points if necessary). Show the points of intersection of these two graphs. c) How does the graph relate to part a? I am not certain what you mean by functions (plural!), so I made a diagram of g(x) = x and w(x) = sqrt(x). The x-values of the points of intersection are the solutions of the equation. Bye, EB

4. Originally Posted by bobby77 1> Is sqrt(x^2)=x an identity (true for all values of x)? Let me add that $\sqrt{x^2}=|x|$. Proof: If $x\geq 0$ then $\sqrt{x^2}=x$, because $x^2=x^2$ and $|x|=x$, so that is true. If $x<0$ then $\sqrt{x^2}=-x$, because $(-x)^2=x^2$ and $-x\geq 0$, and $|x|=-x$, so that is true. Thus, $\sqrt{x^2}=|x|$.
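A tiny numeric check of both answers (a Python sketch, not from the thread):

```python
import math

# The roots found by factoring x - sqrt(x) = 0
for x in (0.0, 1.0):
    assert math.isclose(x - math.sqrt(x), 0.0, abs_tol=1e-12)

# sqrt(x^2) equals |x|, not x, once negative x are allowed
for x in (-3.0, -0.5, 0.0, 2.0):
    assert math.sqrt(x**2) == abs(x)

print("all checks passed")
```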
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034671187400818, "perplexity": 636.8697958596338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936833.6/warc/CC-MAIN-20180419091546-20180419111546-00787.warc.gz"}
https://ediss.uni-goettingen.de/handle/11858/00-1735-0000-002E-E5A6-5
# Investigation of the Structure and Dynamics of Multiferroic Systems by Inelastic Neutron Scattering and Complementary Methods
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8837174773216248, "perplexity": 3194.962952546148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00270.warc.gz"}
https://math.stackexchange.com/questions/1015092/show-that-a-setminus-b-cup-a-setminus-c-b-leftrightarrow-a-b-wedge-b
# Show that $(A\setminus B) \cup (A\setminus C) = B \Leftrightarrow A=B \wedge (B \cap C) = \emptyset$

I believe there are 3 parts to this. 1) $(A\setminus B) \cup (A\setminus C) = B \Rightarrow A=B$ 2) $(A\setminus B) \cup (A\setminus C) = B \Rightarrow (B \cap C) = \emptyset$ 3) $A=B \wedge (B \cap C) = \emptyset \Rightarrow (A\setminus B) \cup (A\setminus C) = B$ I can do the parts labelled 1 and 3 but cannot show part 2. Can anyone explain how to do the 2nd part, i.e. show $(A\setminus B) \cup (A\setminus C) = B \Rightarrow (B \cap C) = \emptyset$?

• @amWhy Happens to the best of us as we can see ;) Deleting in a second... – AlexR Nov 10 '14 at 14:43
• LOL I made so many errors :P thanks for the edits – Namch Nov 10 '14 at 15:14

Hint: Let $(A-B)\cup (A-C)=B$ hold true and assume that there is $x \in B \cap C$. This means in particular that $x\in B$ and $x \in C$. Hence $$x \notin A-B$$ and for the same reason $$x\notin A-C$$ Therefore $$x\notin (A-B)\cup (A-C)$$ but on the other hand $x\in B$, which is a contradiction to the assumption that $$(A-B)\cup (A-C)=B$$

• Nicely done lol I would vote up but need reputation of 15 :P – Namch Nov 10 '14 at 14:54
• Haha, no problem, don't worry. You are welcome – Jimmy R. Nov 10 '14 at 14:56

You have that $(A\setminus B) \cup (A\setminus C)=B$. Then clearly $B \subseteq A\setminus C$, since no element of $B$ can be in $A\setminus B$: you just removed all of $B$. And if $B \subseteq A\setminus C$, that is intuitively saying "none of $B$ is in $C$", i.e. $B \cap C = \emptyset$.

Since you already showed (1), we can use it to ease the proof of (2): $$(A\setminus B) \cup (A\setminus C) \stackrel{(1)}= (B\setminus B) \cup (B\setminus C) = B \setminus C \stackrel{\text{req.}}= B$$ Now $B\setminus C = B \cap C^C$ by definition, so we have $B \cap C^C = B \Rightarrow B \subset C^C \Rightarrow B \cap C = \emptyset$ as claimed.

A systematic solution: In the table below, every column represents some subexpression and holds truth values meaning "belongs to this set". The rows exhaust all $2^3$ combinations. The $7^{th}$ column is the LHS of the equivalence and the $11^{th}$ is the RHS. As you can see, they are identical.

    A B C   A/B A/C +   =B  A=B B.C =0  and
    0 0 0    0   0  0    1   1   0   1   1
    1 0 0    1   1  1    0   0   0   1   0
    0 1 0    0   0  0    0   0   0   1   0
    1 1 0    0   1  1    1   1   0   1   1
    0 0 1    0   0  0    1   1   0   1   1
    1 0 1    1   0  1    0   0   0   1   0
    0 1 1    0   0  0    0   0   1   0   0
    1 1 1    0   0  0    0   1   1   0   0

(+ for $\cup$, . for $\cap$) Grouping each column's rows into a single hexadecimal number:

    A  B  C  A/B A/C +  =B A=B B.C =0 and
    55 33 0F 44  50  54 98 99  03  FC 98

(/ is "and not", + is or, . is and, = is "not xor", bitwise)

• Nice one, but in my opinion it would take too long if this was an exam question. – Namch Nov 10 '14 at 15:00
• For an expression involving $n$ variables and $k$ operators, you will compute $2^n k$ bits, in an automated way. This is to be compared to the symbolic approach. Using a binary or hexadecimal calculator takes less than a minute ($k$ operations). – Yves Daoust Nov 10 '14 at 15:37
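In the same brute-force spirit as the truth table, the whole equivalence can be checked over every triple of subsets of a small universe; a Python sketch (mine, not from the thread):

```python
from itertools import combinations, product

U = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

# Check the equivalence for all 8^3 = 512 triples (A, B, C)
for A, B, C in product(subsets, repeat=3):
    lhs = ((A - B) | (A - C)) == B
    rhs = A == B and not (B & C)
    assert lhs == rhs

print("equivalence holds on all triples")
```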
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7761764526367188, "perplexity": 309.42251202358176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313996.39/warc/CC-MAIN-20190818185421-20190818211421-00236.warc.gz"}
http://xavieranguera.com/phdthesis/node112.html
# Speech/Non-Speech Detection Block

Experiments for the speech/non-speech module were run on the SDM case to make them directly comparable with the baseline system results shown in the previous section, although in this case two slightly different development and test sets were used. The development set consisted of the RT02 + RT04s datasets (16 meeting excerpts) and the test set was the RT05s set (except for the NIST meeting with faulty transcriptions). Forced alignments were used to evaluate the DER, MISS and FA errors. In the development of the proposed hybrid speech/non-speech detector there are three main parameters that need to be set: the minimum duration of the speech/non-speech segments in the energy block, the same minimum duration in the models block, and the complexity of the models in the models block.

The development set was used first to estimate the minimum duration of the speech and non-speech segments in the energy-based detector. Figure 6.1 shows the MISS and FA scores for various durations (in number of frames). While for a final speech/non-speech system one would choose the value that gives the minimum total error, here the goal is to obtain enough non-speech data to train the non-speech models in the second step. It is very important to choose the value with the smaller MISS so that the non-speech model is as pure as possible. This is because the speech model is usually assigned more Gaussian mixtures in the modeling step, so a bigger FA rate does not influence it as much. In the range between durations of 1,000 and 8,000 frames the MISS rate remains quite flat, which indicates how robust the system is to variations in the data; on a new dataset, even if the minimum of the MISS rate does not fall at exactly the same duration as on the development set, the chosen value will most probably still be a very plausible solution. A minimum duration of 2,400 frames (150 ms) is chosen, with MISS = 0.3% and FA = 9.5% (total 9.7%).

The same procedure is followed to select the minimum duration of the speech and non-speech segments decoded by the model-based decoder, using the minimum duration determined by the previous analysis of the energy-based detector. Figure 6.2 shows the FA and MISS error rates for different minimum segment sizes (the same for speech and non-speech); the curve is almost identical for different numbers of mixtures in the speech model, and a complexity of 2 Gaussian mixtures for the speech model and 1 for silence is chosen. In contrast to the energy-based system, this second step does output a final result to be used in the diarization system, so here it is necessary to find the minimum segment duration that minimizes the total percent error. A minimum error of 5.6% was achieved using a minimum duration of 0.7 seconds. If the parameters in the energy-based detector that minimize the overall speech/non-speech error had been chosen instead (at 8,000 frames, 0.5 seconds), the obtained scores would have had a minimum error of 6.0% after the cluster-based decoder step.
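The first, energy-based stage is easy to sketch. The following toy Python fragment (my illustration, not the thesis code) thresholds frame energies and then enforces a minimum segment duration by flipping runs that are too short; the surviving non-speech frames would then be used to train the silence model of the second, model-based decoder:

```python
import numpy as np

def energy_speech_labels(energy, threshold, min_frames):
    """Label frames speech/non-speech by an energy threshold,
    then absorb runs shorter than min_frames into their context."""
    labels = energy > threshold
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            runs.append([start, i, bool(labels[start])])
            start = i
    for run in runs:                      # enforce the minimum duration
        if run[1] - run[0] < min_frames:
            run[2] = not run[2]
    out = np.empty(len(labels), dtype=bool)
    for s, e, lab in runs:
        out[s:e] = lab
    return out
```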
Table 6.3: Speech/non-speech errors on development and test data

                          RT02+RT04s               RT05s
    sp/nsp system         MISS   FA     total      MISS   FA     total
    All-speech system     0.0%   11.4%  11.4%      0.0%   13.2%  13.2%
    Pre-trained models    1.9%   3.2%   5.1%       1.9%   4.6%   6.5%
    hybrid (1st part)     0.4%   9.7%   10.1%      0.1%   10.4%  10.5%
    hybrid system (all)   2.4%   3.2%   5.6%       2.8%   2.1%   4.9%

In table 6.3 results are presented for the development and evaluation sets using the selected parameters, taking into account only the MISS and FA errors from the proposed module. Used as a comparison, the "all-speech" system shows the total percentage of data labelled as non-speech in the reference (ground truth) files. After obtaining the forced alignment from the STT system, there were many non-speech segments with a very small duration, due to the strict application of the 0.3 s minimum pause duration rule to the forced-alignment segmentations. The second row shows the speech/non-speech results using the SRI speech/non-speech system (Stolcke et al., 2005), which was developed using training data from various meeting sources and whose parameters were optimized using the development data presented here and the forced-alignment reference files. If tuned on the hand-annotated reference files provided by NIST for each data set, it obtains a much bigger FA rate, possibly because it is harder for hand-annotated data to follow the 0.3 s silence rule. The third and fourth rows belong to the presented algorithm. The third row shows the errors at the intermediate stage of the algorithm, after the energy-based decoding. These are not comparable with the other systems, as the optimization here is done with respect to the MISS error, not the TOTAL error. The fourth row shows the result of the final output from both stages together. Although the speech/non-speech error obtained on the development set is worse than with the pre-trained system, it is almost 25% relative better on the evaluation set. This changes when considering the final DER. In order to test the usability of this speech/non-speech output for the speaker diarization of meetings data, the baseline system was run with each of the three speech/non-speech modules shown in table 6.3 interposed.

Table 6.4: DER using different speech/non-speech systems

    sp/nsp system         Development   Evaluation
    All-speech            27.50%        25.17%
    Pre-trained models    19.24%        15.53%
    hybrid system         16.51%        13.97%

Table 6.4 shows that the use of any speech/non-speech detection algorithm improves the performance of the speaker diarization system; both systems perform much better than the diarization system alone. This is due to the agglomerative clustering technique, which starts with a large number of speaker clusters and tries to converge to an optimum number of clusters via cluster-pair comparisons. As non-speech data is distributed among all clusters, the more non-speech they contain, the less discriminative the comparison is, leading to more errors. On both the development and evaluation sets the final DER of the proposed speech/non-speech system outperforms the system using pre-trained models, by 14% relative (development) and 10% relative (evaluation). The DER on the development set is much better than with the pre-trained system even though the proposed system has a worse speech/non-speech error, which indicates that the proposed system obtains a set of speech/non-speech segments that is more tightly coupled with the diarization system.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8271676898002625, "perplexity": 1142.1316743532745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119838.12/warc/CC-MAIN-20170423031159-00061-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.ck12.org/algebra/Slope-of-a-Line-Using-Two-Points/lesson/Slopes-of-Lines-from-Two-Points-Honors/
# Slope of a Line Using Two Points

## Find the slope of a line when two points have been graphed on the coordinate plane.

Can you determine the slope of the line with an $x$-intercept of 4 and $y$-intercept of –3?

### Guidance

The slope of a line is the steepness, slant or gradient of that line. Slope is defined as $\frac{\text{rise}}{\text{run}}$ (rise over run) or $\frac{\Delta y}{\Delta x}=\frac{y_2-y_1}{x_2-x_1}$ (change in $y$ over change in $x$). Whichever definition of slope is used, they all mean the same thing. The slope of a line is represented by the letter '$m$' and its value is a real number.

You can calculate the slope of a line by using the coordinates of two points on the line. Consider a line that passes through the points $A(-6, -4)$ and $B(3, -8)$. The slope of this line can be determined by finding the change in $y$ over the change in $x$. The formula that is used is $m=\frac{y_2-y_1}{x_2-x_1}$, where '$m$' is the slope, $(x_1,y_1)$ are the coordinates of the first point and $(x_2,y_2)$ are the coordinates of the second point. The choice of the first and second point will not affect the result.

#### Example A

Determine the slope of the line passing through the pair of points (–3, –8) and (5, 8).

Solution: To determine the slope of a line from two given points, the formula $m=\frac{y_2-y_1}{x_2-x_1}$ can be used. Don't forget to designate your choice for the first and the second point; designating the points will reduce the risk of entering the values in the wrong location of the formula. Here $m=\frac{8-(-8)}{5-(-3)}=\frac{16}{8}=2$.

#### Example B

Determine the slope of the line passing through the pair of points $(9, 5)$ and $(-1, 6)$.

Solution: $m=\frac{6-5}{-1-9}=-\frac{1}{10}$.

#### Example C

Determine the slope of the line passing through the pair of points $(-2, 7)$ and $(-3, -1)$.

Solution: $m=\frac{-1-7}{-3-(-2)}=\frac{-8}{-1}=8$.

#### Concept Problem Revisited

Determine the slope of the line with an $x$-intercept of 4 and $y$-intercept of –3. The intercepts give the two points $(4, 0)$ and $(0, -3)$, so $m=\frac{0-(-3)}{4-0}=\frac{3}{4}$.

### Vocabulary

Slope: The slope of a line is the steepness, slant or gradient of that line. Slope is defined as $\frac{\text{rise}}{\text{run}}$ (rise over run) or $\frac{\Delta y}{\Delta x}=\frac{y_2-y_1}{x_2-x_1}$ (change in $y$ over change in $x$).

### Guided Practice

Calculate the slope of the line that passes through the following pairs of points (a short code sketch follows the Practice section below):

1. (5, –7) and (16, 3)
2. (–6, –7) and (–1, –4)
3. (5, –12) and (0, –6)
4. The local Wine and Dine Restaurant has a private room that can serve as a banquet facility for up to 200 guests. When the manager quotes a price for a banquet she includes the cost of the room rent in the price of the meal. The price of a banquet for 80 people is $900 while one for 120 people is $1300. i) Plot a graph of cost versus the number of people.
ii) What is the slope of the line and what meaning does it have for this situation?

1. The slope is $\frac{10}{11}$.
2. The slope is $\frac{3}{5}$.
3. The slope is $-\frac{6}{5}$.
4. The domain for this situation is $\mathbb{N}$. However, to demonstrate the slope and its meaning, it is more convenient to draw the graph for $x \in \mathbb{R}$ instead of showing just the points on the Cartesian grid. The $x$-axis has a scale of 10 and the $y$-axis has a scale of 100. The slope can be calculated by counting to determine $\frac{\text{rise}}{\text{run}}$. From the point to the left, run four spaces (40) in a positive direction and move upward four spaces (400) in a positive direction. The slope represents the cost of the meal for each person: it will cost $10 per person for the meal.

### Practice

Calculate the slope of the line that passes through the following pairs of points:

1. (3, 1) and (–3, 5)
2. (–5, –57) and (5, –5)
3. (–3, 2) and (7, –1)
4. (–4, 2) and (4, 4)
5. (–1, 5) and (4, 3)
6. (0, 2) and (4, 1)
7. (12, 15) and (17, 3)
8. (2, –43) and (2, –14)
9. (–16, 21) and (7, 2)

The cost of operating a car for one month depends upon the number of miles you drive. According to a recent survey completed by drivers of midsize cars, it costs $124/month if you drive 320 miles/month and $164/month if you drive 600 miles/month.

1. Plot a graph of distance/month versus cost/month.
2. What is the slope of the line and what does it represent?

A Glace Bay developer has produced a new handheld computer called the Blueberry. He sold 10 computers in one location for $1950 and 15 in another for $2850. The number of computers and the cost form a linear relationship.

1. Plot a graph of number of computers sold versus cost.
2. What is the slope of the line and what does it represent?

Shop Rite sells one-quart cartons of milk for $1.65 and two-quart cartons for $2.95. Assume there is a linear relationship between the volume of milk and the price.

1. Plot a graph of volume of milk sold versus cost.
2. What is the slope of the line and what does it represent?

Some college students, who plan on becoming math teachers, decide to set up a tutoring service for high school math students. One student was charged $25 for 3 hours of tutoring. Another student was charged $55 for 7 hours of tutoring. The relationship between the cost and time is linear.

1. Plot a graph of time spent tutoring versus cost.
2. What is the slope of the line and what does it represent?
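The slope formula itself is one line of code; here is a small Python sketch (not part of the lesson) that reproduces the worked examples exactly, using rational arithmetic:

```python
from fractions import Fraction

def slope(p1, p2):
    """m = (y2 - y1) / (x2 - x1) for two points with x1 != x2."""
    (x1, y1), (x2, y2) = p1, p2
    return Fraction(y2 - y1, x2 - x1)

print(slope((-3, -8), (5, 8)))        # 2      (Example A)
print(slope((9, 5), (-1, 6)))         # -1/10  (Example B)
print(slope((80, 900), (120, 1300)))  # 10 dollars per banquet guest
```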
{"extraction_info": {"found_math": true, "script_math_tex": 34, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 3, "texerror": 0, "math_score": 0.7374992370605469, "perplexity": 980.0466111416829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449793.41/warc/CC-MAIN-20151124205409-00037-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/192573/continuity-on-open-sets
# Continuity on open sets

Let $f:A\rightarrow\Bbb R$ with $A\subset\Bbb R$. Show that if, for every $c \in \Bbb R$, the sets $E^-=\{x \in A :f(x)< c\}$ and $E^+=\{x\in A:f(x)>c\}$ are open, then $f:A\rightarrow \Bbb R$ is continuous.

What are you asking? – Cameron Buie Sep 7 '12 at 23:30

## 2 Answers

Let $a,b \in \mathbb{R}$ such that $a < b$. $E^{+}_a = \{x \in A : f(x) > a\} = f^{-1}((a, \infty))$ and $E^-_b = \{x \in A : f(x) < b\} = f^{-1}((-\infty, b))$ are open by the assumption. Hence $f^{-1}((a,b)) = f^{-1}((a, \infty) \cap (-\infty, b)) = f^{-1}((a,\infty)) \cap f^{-1}((-\infty, b))$ is open. Every open subset $U$ of $\mathbb{R}$ is a union of open sets of the form $(a,b)$, so $f^{-1}(U)$ is open. The inverse image of any open set under $f$ is therefore open, hence $f$ is continuous.

Let $a, b \in \mathbb{R}$ such that $a < b$. Then $f^{-1}((a, b))$ is open, by the argument above. Every open subset $U$ of $\mathbb{R}$ is a union of subsets of the form $(a, b)$. Hence $f^{-1}(U)$ is open, and $f$ is continuous.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850808382034302, "perplexity": 162.76913130830673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153736.68/warc/CC-MAIN-20160205193913-00274-ip-10-236-182-209.ec2.internal.warc.gz"}
http://swmath.org/?term=supersymmetric%20gauge%20theory
• # LanHEP • Referenced in 20 articles [sw00502] • automatic generation of Feynman rules in field theory Version 3.0. The LanHEP program version ... derivative and strength tensor for gauge fields. Supersymmetric theories can be described using the superpotential... • # Spheno • Referenced in 41 articles [sw09544] • supersymmetric particle spectrum within a high scale theory, such as minimal supergravity, gauge mediated supersymmetry ... calculate decay widths and branching ratios of supersymmetric particles as well as of Higgs bosons... • # SARAH • Referenced in 36 articles [sw06472] • addition, the tadpole equations are calculated, gauge fixing terms can be given and ghost interactions ... integrated out and non-supersymmetric limits of the theory can be chosen. CP and flavor... • # SUSY LATTICE • Referenced in 4 articles [sw16830] • four-dimensional 𝒩=4 supersymmetric Yang-Mills theory with gauge group SU (N). The lattice ... large-scale framework despite the different target theory. Many routines are adapted from an existing ... object oriented code for simulating supersymmetric Yang-Mills theories”, ibid ... basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance... • # PyR@TE • Referenced in 8 articles [sw16617] • group equations for a general gauge field theory have been known for quite some time ... once the user specifies the gauge group and the particle content of the model ... renormalization group equations for several non-supersymmetric extensions of the Standard Model and found some...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8107593655586243, "perplexity": 2415.0833176023875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541308604.91/warc/CC-MAIN-20191215145836-20191215173836-00296.warc.gz"}
http://www.edurite.com/kbase/graphical-representation-of-motion
# graphical representation of motion

From Wikipedia:

## Axis-angle representation

The axis-angle representation of a rotation, also known as the exponential coordinates of a rotation, parameterizes a rotation by two values: a unit vector indicating the direction of a directed axis (straight line), and an angle describing the magnitude of the rotation about the axis. The rotation occurs in the sense prescribed by the right-hand rule. This representation evolves from Euler's rotation theorem, which implies that any rotation or sequence of rotations of a rigid body in three-dimensional space is equivalent to a pure rotation about a single fixed axis.

The axis-angle representation is equivalent to the more concise rotation vector, or Euler vector, representation. In this case, both the axis and the angle are represented by a non-normalized vector codirectional with the axis, whose magnitude is the rotation angle. Rodrigues' rotation formula can be used to apply to a vector a rotation represented by an axis and an angle.

## Uses

The axis-angle representation is convenient when dealing with rigid body dynamics. It is useful both to characterize rotations and to convert between different representations of rigid body motion, such as homogeneous transformations and twists.

### Example

Say you are standing on the ground and you pick the direction of gravity to be the negative $z$ direction. Then if you turn to your left, you will travel $\tfrac{\pi}{2}$ radians (or 90 degrees) about the $z$ axis. In axis-angle representation, this would be

$$\langle \mathrm{axis}, \mathrm{angle} \rangle = \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix},\theta \right) = \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},\frac{\pi}{2}\right)$$

This can be represented as a rotation vector with a magnitude of $\tfrac{\pi}{2}$ pointing in the $z$ direction:

$$\begin{bmatrix} 0 \\ 0 \\ \frac{\pi}{2} \end{bmatrix}$$

## Rotating a vector

Rodrigues' rotation formula (named after Olinde Rodrigues) is an efficient algorithm for rotating a vector in space, given a rotation axis and an angle of rotation. In other words, the Rodrigues formula provides an algorithm to compute the exponential map from $so(3)$ to $SO(3)$ without computing the full matrix exponential (the rotation matrix). If $\mathbf{v}$ is a vector in $\mathbb{R}^3$ and $\boldsymbol{\omega}$ is a unit vector describing an axis of rotation about which we want to rotate $\mathbf{v}$ by an angle $\theta$ (in a right-handed sense), the Rodrigues formula for the rotated vector is:

$$\mathbf{v}_\mathrm{rot} = \mathbf{v} \cos\theta + (\boldsymbol{\omega} \times \mathbf{v})\sin\theta + \boldsymbol{\omega} (\boldsymbol{\omega} \cdot \mathbf{v}) (1 - \cos\theta).$$

This is more efficient than converting $\boldsymbol{\omega}$ and $\theta$ into a rotation matrix and using the rotation matrix to compute the rotated vector.

## Relationship to other representations

There are many ways to represent a rotation. It is useful to understand how different representations relate to one another, and how to convert between them.

### Exponential map from so(3) to SO(3)

The exponential map is used as a transformation from the axis-angle representation of rotations to rotation matrices:

$$\exp\colon so(3) \to SO(3)$$

Essentially, by using a Taylor expansion one can derive a closed-form relationship between these two representations.
Given an axis $\omega \in \Bbb{R}^{3}$ of length 1 and an angle $\theta \in \Bbb{R}$, an equivalent rotation matrix is given by the following:

$$R = \exp(\hat{\omega} \theta) = \sum_{k=0}^\infty\frac{(\hat{\omega}\theta)^k}{k!} = I + \hat{\omega} \theta + \frac{1}{2}(\hat{\omega}\theta)^2 + \frac{1}{6}(\hat{\omega}\theta)^3 + \cdots$$

$$R = I + \hat{\omega}\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right) + \hat{\omega}^2 \left(\frac{\theta^2}{2!} - \frac{\theta^4}{4!} + \frac{\theta^6}{6!} - \cdots\right)$$

$$R = I + \hat{\omega} \sin(\theta) + \hat{\omega}^2 (1-\cos(\theta))$$

where $R$ is a $3\times 3$ rotation matrix and the hat operator gives the antisymmetric-matrix equivalent of the cross product. This can be easily derived from Rodrigues' rotation formula.

### Log map from SO(3) to so(3)

To retrieve the axis-angle representation of a rotation matrix, calculate the angle of rotation

$$\theta = \arccos\left( \frac{\mathrm{trace}(R) - 1}{2} \right)$$

and then use it to find the normalized axis:

$$\omega = \frac{1}{2 \sin(\theta)} \begin{bmatrix} R(3,2)-R(2,3) \\ R(1,3)-R(3,1) \\ R(2,1)-R(1,2) \end{bmatrix}$$

Note also that the matrix logarithm of the rotation matrix $R$ is

$$\log R = \begin{cases} 0 & \text{if } \theta = 0 \\ \dfrac{\theta}{2 \sin(\theta)} (R - R^\top) & \text{if } \theta \ne 0 \text{ and } \theta \in (-\pi, \pi) \end{cases}$$

except when $R$ has eigenvalues equal to $-1$, where the log is not unique. However, even in the case $\theta = \pi$, the Frobenius norm of the log is

$$\| \log(R) \|_F = \sqrt{2}\, | \theta |$$

Note that given rotation matrices $A$ and $B$,

$$d_g(A,B) := \| \log(A^\top B)\|_F$$

is the geodesic distance on the 3D manifold of rotation matrices.

### Unit quaternions

To transform from axis-angle coordinates to unit quaternions, use the following expression:

$$Q = \left(\cos\left(\frac{\theta}{2}\right), \omega \sin\left(\frac{\theta}{2}\right)\right)$$

## Perspective (graphical)

Perspective (from Latin perspicere, to see through) in the graphic arts, such as drawing, is an approximate representation, on a flat surface (such as paper), of an image as it is seen by the eye. The two most characteristic features of perspective are that objects are drawn smaller as their distance from the observer increases, and foreshortened along the line of sight.

Question: I have heard all about the advantages of graphic representation of an equation. What are the disadvantages of it?

Answers: Limited range and limited accuracy. If you graph y = x on graph paper where x runs between −10 and +10, then the value of y cannot be determined if x = 11. Furthermore, the accuracy is limited to the resolution of the graph paper, so you might read y = 4.02 for x = 2. More disadvantages: graphs can be difficult to read and require training to interpret; graph paper is fragile (some water and your graph is gone); and you can't email it.

Question: I have a project where I need a graphic representation, basically a picture or a graph, of something that has to do with the Cuban Missile Crisis. I need to make it myself, so I cannot copy and paste something. I already made a timeline of the events that occurred, so what else could I do?

Answers: Don't forget to present the fact that Kennedy left Fidel Castro (a sworn enemy of the USA) alive and well, in charge just 90 miles south of the USA border. For almost 50 years Fidel has done his best to damage the USA. Had it not been for Mikhail Gorbachev, who unwillingly pulverized communism in Russia, Fidel would be knocking at the door of the White House.
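Pulling together the formulas excerpted above (Rodrigues' formula, the exponential and log maps, and the quaternion conversion), here is a small self-contained sketch. The function names are mine, not from the article, and the log map as written is only valid away from $\theta = 0$ and $\theta = \pi$:

```python
import numpy as np

def hat(w):
    """Antisymmetric 'hat' matrix such that hat(w) @ v == np.cross(w, v)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def rodrigues_rotate(v, axis, theta):
    """Rotate vector v about a unit axis by angle theta (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    v = np.asarray(v, dtype=float)
    return (v * np.cos(theta)
            + np.cross(axis, v) * np.sin(theta)
            + axis * np.dot(axis, v) * (1.0 - np.cos(theta)))

def exp_map(axis, theta):
    """Axis-angle -> rotation matrix: R = I + sin(t) W + (1 - cos(t)) W^2."""
    W = hat(axis)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

def log_map(R):
    """Rotation matrix -> (axis, theta); valid for 0 < theta < pi."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis, theta

def to_quaternion(axis, theta):
    """Axis-angle -> unit quaternion (w, x, y, z)."""
    axis = np.asarray(axis, dtype=float)
    return np.concatenate(([np.cos(theta / 2.0)], axis * np.sin(theta / 2.0)))

# The worked example from the text: a 90-degree turn about the z axis.
axis, theta = np.array([0.0, 0.0, 1.0]), np.pi / 2
R = exp_map(axis, theta)
v = np.array([1.0, 0.0, 0.0])
print(R @ v)                             # ~ [0, 1, 0]
print(rodrigues_rotate(v, axis, theta))  # same result, without forming R
print(log_map(R))                        # recovers [0, 0, 1] and pi/2
print(to_quaternion(axis, theta))        # ~ [0.7071, 0, 0, 0.7071]
```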
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918633103370667, "perplexity": 964.9098715642513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992543.60/warc/CC-MAIN-20150728002312-00040-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/particle-physics-higgs-mechanism-spontaneous-symmetry-breaking-gauge-invariance.609143/
# Homework Help: [particle physics] Higgs mechanism: spontaneous symmetry breaking & gauge invariance

1. May 26, 2012

### nonequilibrium

1. The problem statement, all variables and given/known data

Suppose that there is a gauge group with 24 independent symmetries and we find a set of 20 real scalar fields such that the scalar potential has minima that are invariant under only 8 of these symmetries. Using the Brout-Englert-Higgs mechanism, how many physical fields are there that are
- massive spin 1
- massless spin 1
- Goldstone scalars
- Higgs scalars

2. Relevant equations

N.A.

3. The attempt at a solution

I'm not sure, since I have only seen an example worked out where there was a gauge "group" with 1 symmetry and 2 real scalar fields; the 1 symmetry was broken, and it ended up giving one massive spin 1, zero massless spin 1, zero Goldstone scalars, and 1 Higgs scalar. I'm not sure how to generalize this to a more general case. But let's give it a try: if we assume that for each broken symmetry a gauge boson gets mass, we end up with "16 massive spin 1" (since 24 − 8 symmetries are broken). Hence "8 massless spin 1" remain. If I now presume that each gauge boson getting a mass is accompanied by the eating of one Goldstone scalar (which seems sensible from the perspective of the gauge boson gaining one degree of freedom), 16 Goldstone scalars have been eaten, and presuming that no (physical) Goldstone scalars can remain (?) (i.e. "0 Goldstone scalars"), we conclude that from the 20 real scalar fields, "4 Higgs scalars" survive.

Is the answer and/or some of the reasoning correct? Maybe I'm making it too complicated... (For reference we're using Griffiths' Introduction to Elementary Particles, although note that the question is not in the book itself.)
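The poster's counting can be checked mechanically. Here is a minimal sketch of that bookkeeping; the function name and structure are mine, not from the thread, and it simply encodes the poster's assumption that each broken generator gives one massive gauge boson that eats one Goldstone scalar:

```python
def higgs_counting(total_gens, unbroken_gens, real_scalars):
    """Count physical fields after the Brout-Englert-Higgs mechanism,
    assuming one gauge boson eats one Goldstone per broken generator."""
    broken = total_gens - unbroken_gens   # broken symmetry generators
    massive_spin1 = broken                # each broken generator -> massive gauge boson
    massless_spin1 = unbroken_gens        # unbroken generators stay massless
    goldstones = 0                        # all Goldstones eaten in a gauge theory
    higgs = real_scalars - broken         # surviving physical scalars
    return massive_spin1, massless_spin1, goldstones, higgs

print(higgs_counting(24, 8, 20))  # -> (16, 8, 0, 4), matching the poster's answer
```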
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566444754600525, "perplexity": 1030.1909461267192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832330.93/warc/CC-MAIN-20181219130756-20181219152756-00159.warc.gz"}
https://indico.cern.ch/event/558411/program
# The 4th NPKI Workshop, "Searching for New Physics on the Horizon"

May 12 – 17, 2019
Korea University
Asia/Seoul timezone

## Scientific Program

The scientific program is available as an indico timetable (compact style) or as a downloadable file on the event page.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849064111709595, "perplexity": 26481.142381336853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573053.67/warc/CC-MAIN-20220524142617-20220524172617-00360.warc.gz"}
https://sciencehouse.wordpress.com/2013/11/
# Fred Sanger 1918 – 2013

Perhaps the greatest biologist of the twentieth century and two-time Nobel prize winner, Fred Sanger, has died at the age of 95. He won his first Nobel in 1958 for determining the amino acid sequence of insulin and his second in 1980 for developing a method to sequence DNA. An obituary can be found here.

# Michaelis-Menten kinetics

This year is the one hundredth anniversary of the Michaelis-Menten equation, which was published in 1913 by German-born biochemist Leonor Michaelis and Canadian physician Maud Menten. Menten was one of the first women to obtain a medical degree in Canada and travelled to Berlin to work with Michaelis because women were forbidden from doing research in Canada. After spending a few years in Europe she returned to the US to obtain a PhD from the University of Chicago and spent most of her career at the University of Pittsburgh. Michaelis also eventually moved to the US and had positions at Johns Hopkins University and the Rockefeller University.

The Michaelis-Menten equation is one of the first applications of mathematics to biochemistry and perhaps the most important. These days people, including myself, throw the term Michaelis-Menten around to mean generally any function of the form

$f(x)= \frac {Vx}{K+x}$

although its original derivation was to specify the rate of an enzymatic reaction. In 1903, it had been discovered that enzymes, which catalyze reactions, work by binding to a substrate. Michaelis took up this line of research and Menten joined him. They focused on the enzyme invertase, which catalyzes the breaking down (i.e. hydrolysis) of the substrate sucrose (i.e. table sugar) into the simple sugars fructose and glucose. They modelled this reaction as

$E + S \overset{k_f}{\underset{k_r}{\rightleftharpoons}} ES \overset{k_c}{\rightarrow} E + P$

where the enzyme E binds to a substrate S to form a complex ES, which releases the enzyme and forms a product P. The goal is to calculate the rate of the appearance of P. Under the now-standard steady-state treatment (assuming the complex concentration is roughly constant), that rate comes out in the Michaelis-Menten form above: $\frac{d[P]}{dt} = \frac{V_{max}[S]}{K_M+[S]}$, with $V_{max}=k_c[E]_0$ and $K_M = (k_r+k_c)/k_f$. (A numerical check of this appears at the end of this post.)

# Talk in Taiwan

I'm currently at the National Center for Theoretical Sciences, Math Division, on the campus of the National Tsing Hua University, Hsinchu, for the 2013 Conference on Mathematical Physiology. The NCTS is perhaps the best run institution I've ever visited. They have made my stay extremely comfortable and convenient.

Here are the slides for my talk on Correlations, Fluctuations, and Finite Size Effects in Neural Networks. Here is a list of references that go with the talk:

E. Hildebrand, M.A. Buice, and C.C. Chow, 'Kinetic theory of coupled oscillators', Physical Review Letters 98, 054101 (2007) [PRL Online] [PDF]

M.A. Buice and C.C. Chow, 'Correlations, fluctuations and stability of a finite-size network of coupled oscillators', Phys. Rev. E 76, 031118 (2007) [PDF]

M.A. Buice, J.D. Cowan, and C.C. Chow, 'Systematic Fluctuation Expansion for Neural Network Activity Equations', Neural Comp. 22:377-426 (2010) [PDF]

C.C. Chow and M.A. Buice, 'Path integral methods for stochastic differential equations', arXiv:1009.5966 (2010).

M.A. Buice and C.C. Chow, 'Effective stochastic behavior in dynamical systems with incomplete information', Phys. Rev. E 84:051120 (2011).

M.A. Buice and C.C. Chow, 'Dynamic finite size effects in spiking neural networks', PLoS Comp Bio 9:e1002872 (2013).

M.A. Buice and C.C. Chow, 'Generalized activity equations for spiking neural networks', Front. Comput. Neurosci. 7:162, doi: 10.3389/fncom.2013.00162, arXiv:1310.6934.

Here is the link to relevant posts on the topic.
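Returning to the Michaelis-Menten post above: the derivation was cut short, so here is a short numerical check that the full mass-action scheme approaches the Michaelis-Menten rate law under the steady-state assumption. All rate constants below are illustrative made-up values, not fit to invertase data:

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, kr, kc = 10.0, 1.0, 1.0     # binding, unbinding, catalytic rates (arbitrary units)
E0, S0 = 0.1, 5.0               # total enzyme and initial substrate
Km = (kr + kc) / kf             # Michaelis constant
Vmax = kc * E0

def full_model(t, y):
    """Mass-action ODEs for E + S <-> ES -> E + P."""
    E, S, ES, P = y
    bind, unbind, cat = kf * E * S, kr * ES, kc * ES
    return [-bind + unbind + cat,   # dE/dt
            -bind + unbind,         # dS/dt
            bind - unbind - cat,    # dES/dt
            cat]                    # dP/dt

sol = solve_ivp(full_model, (0, 50), [E0, S0, 0, 0], dense_output=True)

# Compare the exact production rate kc*[ES] with the MM approximation.
t = 25.0
E, S, ES, P = sol.sol(t)
print("exact rate:", kc * ES)
print("MM rate:   ", Vmax * S / (Km + S))   # nearly identical once ES equilibrates
```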
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3587416410446167, "perplexity": 3810.0772377042485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321696.96/warc/CC-MAIN-20190824194521-20190824220521-00197.warc.gz"}
https://salttheoats.wordpress.com/2014/05/21/ap-physics-one-storyline/
# AP Physics 1 Storyline

As I prepare for the transition to the AP 1 course, I've taken this school year (2013-2014) to begin trying some things out. Based on what I've tried, here is the storyline I'm going to use with my AP 1 students next year. Before I get into the details: I do plan to use Modeling Instruction throughout the course. If you haven't had the chance to take a workshop, do yourself a favor and find one. Also, I plan to make future posts providing more detail for each unit.

We begin the year jumping right into the Constant Velocity Particle Model (CVPM). The students at my school come out of chemistry, and for the most part have decent skills when it comes to doing labs. Although we don't use modeling in our other science classes, they have a majority of the basic skills, so I save a little time by skipping the Scientific Methods Unit. Our second unit then progresses to the Constant Acceleration Particle Model (CAPM). As I mentioned, I plan to give more detail later, but for those familiar with the materials provided, I'm not doing much different from those documents.

The third unit is where I make my first big adjustment from the traditional modeling curriculum. After reading numerous posts from some of the bloggers I admire most (read Momentum is King, Kelly O'Shea's blog, and more recently Mazur's Physics Textbook), I decided to try teaching momentum before Newton's Laws. During this third unit, the Momentum Transfer Model (MTM), we focus on interaction diagrams and the swapping of momentum as the mechanism of physical interactions. We stress the choosing of a system, and that momentum swaps within the system, or swaps out of the system as an impulse. In the end, we are building the concept of Newton's Third Law. In addition to what I call Interaction Diagrams (others call them system schema), we also introduce Momentum Diagrams (IF Charts). We hold off on discussing collisions in great detail until impulses are further studied in unit 5.

The fourth unit, the Balanced Forces Particle Model (BFPM), then begins to bring in the concept of force as the rate of swapping momentum. Here we introduce the major contact forces: normal, tension, friction (name, not equation), and the non-contact gravitational force. We also begin using force diagrams to determine whether the forces are balanced. We stress one way of understanding Newton's 1st Law: "balanced forces -> no acceleration; unbalanced forces -> acceleration."

In the fifth unit, the Unbalanced Forces Particle Model (UFPM), we get into Newton's 2nd Law in two ways. One, the classic:

$a=\frac{F_{net}}{m}$

And two, we build the parallel between kinematics and Newton's Laws. In kinematics, the slope of a position-time graph gives the velocity-time graph, and the slope of the velocity-time graph gives the acceleration-time graph; finding areas lets us go the other way. The same is then true of momentum and force: the slope of a momentum-time graph gives the force-time graph, while the area under a force-time graph gives the change in momentum (the impulse). For those students going on to calculus-based physics, this helps lay the groundwork; for the rest, it shows a nice connection between these different models. With this new information, we can now add a force vs. time graph into the momentum graphs and make "IFF" graphs. Other features of the unit are building the equation for friction in relation to the normal force, and the independence of components, by looking at 2D projectile motion problems. (A small numerical sketch of the slope/area parallel follows below.)
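Here is the sketch promised above. The cart mass and force ramp are made-up numbers, not from the course materials; the point is only that the area under F–t gives the momentum, and the slope of p–t recovers the force:

```python
import numpy as np

# Hypothetical data: a 2 kg cart pushed with a force ramping from 0 to 4 N over 3 s.
m = 2.0
t = np.linspace(0.0, 3.0, 301)
F = (4.0 / 3.0) * t                 # F(t) in newtons

# Area under the F-t graph = impulse = change in momentum.
impulse = np.trapz(F, t)            # triangle area: 0.5 * 3 s * 4 N = 6 N*s

# Running area gives the momentum-time graph p(t).
p = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t))))

# Slope of the p-t graph recovers F(t).
F_recovered = np.gradient(p, t)

print(impulse, p[-1])               # both ~ 6.0 kg*m/s
print(F_recovered[150], F[150])     # slope of p(t) ~ force at t = 1.5 s (2.0 N)
```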
We then wrap up the first semester with our 6th unit, the Energy Transfer Model (ETM). After building the concept of energy storage through Energy Diagrams (LOL Diagrams), we discover that energy storage is a "cheat" to help us solve more complex problems, since it gives us a second conserved quantity. We come back to collisions and find that, in elastic collisions, we can now build two conservation equations: 1. momentum (IF charts) and 2. energy (LOL diagrams).

As a review of the first semester, the students then have to build a paper car that will hold an egg inside. They will have two tests: 1) a speed test to see who has the fastest car and 2) a crash test to see who has the safest car. During the design, they must make use of all the models we have built this semester.

To start the second semester, we begin studying the Central Force Particle Model (CFPM), or what most people would call uniform circular motion. In this unit we also build in the concepts of Newton's universal gravitation and satellite motion.

In unit 8 we move on to full rotational motion, the Rotating Bodies Model (RBM). To be honest, I may try to split this up into two units, as it's got a lot of stuff going on. In short, within this unit we retrace units 1-6, but in the rotating or polar frame of reference. We begin with rotational kinematics ($\theta$ vs. t, $\omega$ vs. t, and $\alpha$ vs. t). Afterwards, we build in dynamics with angular momentum, torque, and rotational energy storage.

In unit 9, we move on to harmonic motion with the Oscillating Particle Model (OPM). Overall, we stay pretty true to the modeling materials here. We start by looking at a mass bouncing on a hanging spring, and later bring in pendulum motion.

In unit 10, we move on to the Mechanical Wave Model (MWM), in which we build a mental model of coupled oscillators. From what I can tell, AP-1 only focuses on one-dimensional waves, so we look at boundary effects: reflection (open/fixed) and refraction. We also build in wave superposition. We then look at sound waves and Doppler shifts as further examples of waves. At least so far, I don't build in diffraction through "narrow" slits or 2D interference patterns.

In the last unit, we look at circuits in what I call the Charge Flow Model (CFM). We begin with sticky-tape activities to introduce the electric force and electric energy. During that discussion we bring in the concept of gravitational potential ($gh$) to help understand the concept of electric potential ($V$). From there, we have them build simple circuits with lightbulbs, then move on to simple circuits with fixed resistors while measuring the current (the flow rate of charge). We eventually get to adding multiple resistors in series and in parallel, and try to create a model that explains how the resistors add in these two different ways (a small sketch of such a model appears below).

To review the entire year, we then do a video analysis project in which the students must analyze movie, TV, or internet videos and determine how feasible those scenes actually are. Here is an example from which I got my idea (video embedded in the original post).
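Here is the small sketch mentioned in the circuits unit: the equivalent-resistance rules the students are asked to discover, written as code. The resistor values are examples of mine, not from the post:

```python
def series(*rs):
    """Series: the same charge flows through each resistor, so voltage drops add,
    and the equivalent resistance is the sum."""
    return sum(rs)

def parallel(*rs):
    """Parallel: the same potential difference across each branch, currents add,
    so conductances (1/R) add."""
    return 1.0 / sum(1.0 / r for r in rs)

print(series(100.0, 220.0))    # 320 ohms: larger than either resistor
print(parallel(100.0, 220.0))  # ~68.75 ohms: smaller than either branch
```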
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.722088098526001, "perplexity": 786.985525418823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947795.46/warc/CC-MAIN-20180425100306-20180425120306-00274.warc.gz"}
https://ashpublications.org/blood/article/124/21/4339/92104/Natural-Ligands-for-Rxra-but-Not-Rara-Are-Observed
## Abstract

The retinoid receptors RARA and RXRA are both important transcription factors that influence hematopoietic cell growth and differentiation. RARA and RXRA are both ligand-dependent transcription factors. The natural ligand for RARA is thought to be all-trans retinoic acid (ATRA); the natural ligand for RXRA is unclear, although 9-cis retinoic acid is an active ligand in vitro. Aldehyde dehydrogenase (ALDH) metabolism is the rate-limiting step in ATRA synthesis, and ALDH activity is associated with stem cell self-renewal in hematopoietic stem cells (HSCs) and in cancer stem cells. It is unknown whether these two functions are related. In order to measure the presence and regulation of natural retinoids in vivo, we developed a UAS-GFP reporter mouse. We found this system to be highly sensitive and specific. We transplanted UAS-GFP mouse bone marrow cells with virus expressing either Gal4-RARA-IRES-mCherry (Gal4-RARA-IC) or Gal4-RXRA-IRES-mCherry (Gal4-RXRA-IC). Because PML-RARA is proposed to act as a dominant-negative, and ATRA induces significant differentiation of myeloid cells, we were surprised to observe no GFP expression in mice transplanted with Gal4-RARA-IC. Instead, we observed GFP in mice transplanted with Gal4-RXRA-IC, consistent with natural RXRA ligands, but not RARA ligands, in bone marrow cells. When we treated mice with either ATRA or bexarotene, we observed that most hematopoietic cell types can respond to an active retinoid, with the exception of Kit+Lin-Sca+ HSCs, which had a significantly attenuated response. Ex vivo, we found that the P450 inhibitors liarozole and talarozole augmented the response to retinoids in Kit+ hematopoietic stem/progenitor cells, suggesting that HSCs have high rates of retinoid degradation via active P450 pathways and thus maintain a retinoid-deplete environment. We further tested whether hematopoietic cells might respond to hematopoietic stress through retinoid receptor signaling. We observed a significant increase in the number of GFP+ cells when Gal4-RXRA-IC transplanted mice were treated with either 5FU or GCSF, but no response in Gal4-RARA-IC transplanted mice. As a control, we repeated the studies using a Gal4-RXRA vector with the AF2 domain deleted. The RXRAdeltaAF2 mutant can bind ligand but does not respond to it, although it can still be activated through a heterodimeric partner. We observed no GFP induction by 5FU or GCSF with the RXRAdeltaAF2 mutation, suggesting that the GFP response is to natural RXRA ligands, and not to alternative signaling through a heterodimeric partner (e.g. Lxra or Pparg).

These data suggest that HSCs maintain low levels of natural retinoids, and that the mechanism of stem-cell-associated ALDH is therefore not through ATRA production and RARA activation. In addition, bone marrow cells are exposed to natural RXRA ligands, but not RARA ligands, under homeostatic conditions, and this exposure increases during the response to 5FU and GCSF, suggesting that ligand-dependent RXRA activation may play a critical role in the hematopoietic response to 5FU and GCSF.

Disclosures

No relevant conflicts of interest to declare.

## Author notes

* Asterisk with author names denotes non-ASH members.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150454998016357, "perplexity": 15344.310197114164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00164.warc.gz"}
https://publikationen.bibliothek.kit.edu/1000054005
# Correlations between jets and charged particles in PbPb and pp collisions at √s_NN = 2.76 TeV

CMS Collaboration; Khachatryan, V.; Sirunyan, A.M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V.M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; et al.

Abstract (English): The quark-gluon plasma is studied via medium-induced changes to correlations between jets and charged particles in PbPb collisions compared to pp reference data. This analysis uses data sets from PbPb and pp collisions with integrated luminosities of 166 μb⁻¹ and 5.3 pb⁻¹, respectively, collected at √s_NN = 2.76 TeV. The angular distributions of charged particles are studied as a function of relative pseudorapidity (Δη) and relative azimuthal angle (Δφ) with respect to reconstructed jet directions. Charged particles are correlated with all jets with transverse momentum (pT) above 120 GeV, and with the leading and subleading jets (the highest and second-highest in pT, respectively) in a selection of back-to-back dijet events. Modifications in PbPb data relative to pp reference data are characterized as a function of PbPb collision centrality and charged-particle pT. A centrality-dependent excess of low-pT particles is present for all jets studied, and is most pronounced in the most central events. This excess of low-pT particles follows a Gaussian-like distribution around the jet axis, and extends to l…

Affiliated institution at KIT: Institut für Experimentelle Kernphysik (IEKP)
Publication type: Journal article
Year: 2016
Language: English
Identifiers: ISSN 1126-6708; URN urn:nbn:de:swb:90-540050; KITopen-ID 1000054005
Published in: Journal of High Energy Physics, Vol. 2016, Issue 2, pp. 1–39
Publication note: Funded by SCOAP3
Indexed in: Web of Science, Scopus
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454455971717834, "perplexity": 20923.04117724999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823895.25/warc/CC-MAIN-20181212134123-20181212155623-00322.warc.gz"}
http://mathoverflow.net/questions/58673/finite-quotients-of-fundamental-groups-in-positive-characteristic?sort=oldest
# finite quotients of fundamental groups in positive characteristic

For affine smooth curves over $k=\bar{k}$ of char. $p$, Abhyankar's conjecture (proved by Raynaud and Harbater) tells us exactly which finite groups can be realized as quotients of their fundamental groups. What about complete smooth curves, or more generally higher-dimensional varieties? Are there results or conjectural criteria (or necessary conditions) for the finite quotients of their $\pi_1$? (Definitely not too much was known around 1990; see Serre's Bourbaki article on this.)

In particular, let $G$ be the automorphism group of the supersingular elliptic curve in char. $p=2$ or $3$ (see "supersingular elliptic curve in char. 2 or 3" for various descriptions of its structure). Is there (and if yes, how to construct) a projective smooth variety in char. $p$ having $G$ as a quotient of its $\pi_1$? Certainly there are lots of affine smooth curves with this property (e.g. $\mathbb G_m$), and I wonder if for some of them the covering is unramified at infinity (so that we win!).

– I think you can get $G$ as a quotient of the fundamental group of any curve of genus $g>1$, since such groups have only one (topological) relation. – S. Carnahan Mar 16 '11 at 19:59

– That's my hope too. But is there any reference for an Abhyankar-type conjectural statement for complete curves? – shenghao Mar 16 '11 at 20:19

– In fact, I don't know if we know now that the $\pi_1$ of projective smooth varieties (or just curves) in char. $p$ are of finite presentation in general, although in char. 0 this is the case. At least at the time SGA1 was written this was not known; cf. SGA1, Exp. X, 2.8. – shenghao Mar 16 '11 at 20:51

– A naive Abhyankar-type statement would claim that a finite group $G$ is a quotient of $\pi_1(X)$ iff $G/p(G)$ is such a quotient in characteristic zero, where $p(G)$ is the characteristic subgroup of $G$ generated by its $p$-Sylow subgroups. Unfortunately, it fails miserably already for $X$ the projective line. – ACL Mar 17 '11 at 9:36

For a supersingular elliptic curve $E$ over an algebraically closed field of characteristic two or three, there exists a smooth curve $C$ of higher genus such that $Aut_0(E)$ is a finite quotient of $\pi_1(C)$. In this paper, it is explained how to realize groups which have the property that their maximal $p$-Sylow subgroup ($p$ being the characteristic) is normal. The automorphism groups of supersingular elliptic curves satisfy this property.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081775546073914, "perplexity": 336.2832623171841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446943.4/warc/CC-MAIN-20141017005726-00289-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/vertical-spring-elevator-question.230620/
# Vertical spring - elevator question

1. Apr 22, 2008

### Volcano

A mass is attached to a spring supported from the ceiling of an elevator. We pull down on the mass and let it vibrate. If the elevator starts to accelerate upward with a constant acceleration:

1) How does the maximum velocity change?
2) How does the amplitude change?
3) How does the total energy change?

I think the amplitude and maximum velocity do not change, because the acceleration doesn't change the net restoring force but only shifts the equilibrium point down. Am I right?

2. Apr 22, 2008

### Hootenanny

Staff Emeritus
I would agree with your choice with respect to the amplitude. However, in terms of the maximum velocity, it depends on your frame of reference: what are you measuring the velocity relative to?

P.S. We have https://www.physicsforums.com/forumdisplay.php?f=152 for all your textbook questions.

Last edited by a moderator: Apr 23, 2017

3. Apr 22, 2008

### Crazy Tosser

[ASCII diagram of a mass M hanging from a spring, moving upward]

4. Apr 22, 2008

### Hootenanny

Staff Emeritus
Your diagram is wrong, the mass is hanging down from the ceiling, inside the elevator, but thanks for your contribution anyway...

5. Apr 22, 2008

### Hootenanny

Staff Emeritus
Edit: It wouldn't actually make any difference to the answer, but it's best not to confuse the matter.

6. Apr 22, 2008

### Crazy Tosser

oops D=

[Corrected ASCII diagram: the mass M on a spring inside the elevator, with the elevator moving up]

Last edited: Apr 22, 2008

7. Apr 23, 2008

### Volcano

It is relative to the elevator. But honestly, I cannot explain it with equations; my choice is completely instinctive. By the way, it would be nice to be able to post pictures; LaTeX is hard to use for figures.

8. Apr 23, 2008

### Hootenanny

Staff Emeritus
Then you are correct, if you're measuring the velocity of the mass with respect to the elevator. Obviously, if the velocity is measured relative to some other 'fixed' point outside the elevator then this will not be the case.

9. Apr 23, 2008

### Volcano

I want to understand the effects of adding a force and adding mass while it vibrates. As you confirmed, an additional constant force does not change the amplitude or the maximum velocity. Now I wonder how added mass changes the amplitude and the maximum velocity.

Suppose there is no elevator: the same spring and mass are attached to the ceiling of a doorway instead, vibrating. While the mass is at the bottom position, an additional mass is suddenly attached to the first one. What happens now?

I think, as in the previous problem, the equilibrium point shifts down. The amplitude will not change because the net force was not changed. But the maximum velocity will decrease, because the period will increase while the distance stays the same. Am I right now?

10. Apr 23, 2008

### Hootenanny

Staff Emeritus
I agree.

11.
Apr 23, 2008

### Volcano

I mean that the period grows with the mass, so if the mass increases the period will too. Now, if the amplitude is the same as before, then the distance covered in a quarter period is the same too. With the distance the same and the time increased, the average speed must decrease. As I understood, you agree that the maximum velocity decreases. But I cannot calculate these. Any suggestions?
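No one in the thread supplies the calculation, so here is a sketch of the standard mass-spring relations the poster is reaching for. The spring constant, amplitude, and masses are made-up numbers; the amplitude is held fixed here, following the thread's conclusion:

```python
import numpy as np

k = 50.0        # spring constant in N/m (example value)
A = 0.10        # amplitude in m, taken as unchanged per the thread

for m in (1.0, 2.0):                 # original mass, then doubled mass
    omega = np.sqrt(k / m)           # angular frequency of the mass-spring system
    T = 2 * np.pi / omega            # period: T = 2*pi*sqrt(m/k), grows like sqrt(m)
    v_max = A * omega                # maximum speed, reached at the equilibrium point
    print(f"m = {m}: T = {T:.3f} s, v_max = {v_max:.3f} m/s")

# Doubling m multiplies T by sqrt(2) and divides v_max by sqrt(2),
# confirming the poster's qualitative argument.
```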
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653540253639221, "perplexity": 1733.3180592989906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891485.97/warc/CC-MAIN-20180122153557-20180122173557-00744.warc.gz"}
http://www.math.columbia.edu/~woit/wordpress/?p=4790
# Higgs Discovery Announcement July 4

I learned via Physics World that CERN will hold a press conference on Wednesday July 4 to give an "Update on the search for the Higgs Boson". More information has just appeared (including a press release here), showing that there will be a 2 hour seminar on the results starting at 9am Geneva time, followed by a press conference at 11am.

Reports from the experiments indicate that at least one of them, if not both, will reach the 5 sigma level of significance for the Higgs signal when they combine 2011 and 2012 data and the most sensitive channels. So, this will definitely be the long-awaited Higgs discovery announcement, and party-time for HEP physicists.

One could note that the last major announcement of the discovery of a new elementary particle at CERN was also made on a Wednesday, July 4, back in 1984. That one didn't work out so well, but things are very different now, with results from two independent experiments and a high standard of evidence.

This entry was posted in Experimental HEP News. Bookmark the permalink.

### 31 Responses to Higgs Discovery Announcement July 4

1. Stan says:

What particle was announced on July 4, 1984?

2. Peter Woit says:

The top quark, with a mass of about 40 GeV. Only problem is that the top quark really has a mass of about 173 GeV....

3. Long-time follower of HEP/Not Even Wrong here:

A bit ahead of the news, unless you know more, Peter! (Because the press "ad" really says "update on Higgs search".)

4. Peter Woit says:

Thanks Alfons,
There are some blogs out there with a policy of only discussing news approved by the relevant authorities. Not this one....

5. Christian Takacs says:

Please don't clobber me with criticism if I sound uninformed about this... but... though I do see all these articles, charts and graphs indicating something (a large particle?) may have been found, I do not see anything (outside of a desire to find the Higgs) to indicate this IS the Higgs particle, nor do I see any explanation of how this will be demonstrated if a particle IS discovered. I would also ask: isn't the Higgs boson/particle's existence based on the Standard Model's assumption that mass is granted, imparted, or virtually assigned by said particle? I just seem to be seeing lots of "there are indications of something there; if confirmed, it's the Higgs particle" statements. I see no mention of "what if it's just a newly discovered particle which is not the Higgs". I would have thought they would first want to confirm that something was found, THEN go about finding out whether the particle passes some falsifiable testing procedure for confirming it works as advertised.

6. Peter Woit says:

Christian,
The Standard Model makes extremely detailed predictions about exactly what Higgs decays should look like, and the LHC experiments are carefully tuned to look for exactly these predicted signals. What they are seeing is exactly what they were looking for (with the interesting caveat that the production rate may be higher than expected, but that calculation is hard). So, either this is the Higgs, or if it's something different, you have to explain why it is doing precisely what the Higgs was supposed to do. Anyway, the big effort from now on will be trying to more precisely measure the properties of this signal to compare to the SM prediction.

7. Anonyrat says:

Continuing to subvert science, I see 🙂

8. Eastender says:

So who gets the Nobel prize .........

9.
SilverSB says:

What will happen on the 4th of July is just that HEP will become less interesting. The LHC was built to catch a Higgs; everything else would be a bonus (if Nature is kind enough to throw at us hints of supersymmetry or dark matter at home-made energies, something I really doubt). So, yeah, the Higgs is there. We have just seen the final battle in the first 5 minutes of the movie. How anticlimactic...

10. emile says:

SilverSB: you are not 5 min. into the movie. The movie has been going on for decades... If a Higgs is confirmed, then I don't blame you for thinking that this would be anticlimactic. But if I've learned anything, it's that Nature doesn't care what we think.

11. crandles says:

So is "pay $7 now to get $10 back if/when a paper is published in 2012 claiming a 5 sigma discovery" a good bet? Or is there too much chance that the paper will wait for more information from different decay channels and other hints of consistency with being a Higgs boson, and then be subjected to so much scrutiny in peer review that it may not be published until 2013? Or is there too much risk that the judge will read a paper claiming a 5 sigma discovery of a particle consistent with the Higgs boson as insufficient to say the particle is the Higgs boson? (If such a paper isn't sufficient, will anything ever be sufficient? And is that enough to ensure the judge will decide that such a paper is sufficient?)

(The rules say: "Confirmation of the Higgs Boson particle having been observed must be published in a major scientific journal for this contract to be expired. Clarification (Jan 5th 2009): for the Higgs Boson particle to be 'observed' there must be a 'five sigma discovery' of the particle.")

BTW, there isn't much liquidity on this bet at intrade.com: 37 contracts at US$7 means paying US$259 to get US$370, less US$5 per month in fees and bank/other fees for money transfers. Also, by the time someone new to intrade gets that money into their account, the opportunity might be gone. So it's probably not worth the effort and risk for someone new to intrade. (Now why do I suspect that Christian Takacs is an intrade trader?)

12. SilverSB says:

emile,
The LHC movie was decades in the making, but only two years in the playing. The Higgs is coming too soon, but things are what they are. Sure, Nature doesn't care what we think and what we want; that's not something we learned. It's something most people should understand.

13. christian M says:

Hi there,
Nice and happy news. Just one comment/question. We find almost everywhere the misleading information that the Higgs boson gives mass. That is untrue in my opinion. The Higgs boson is the trace left by something which gives mass, but not the boson itself. The scalar field responsible for giving mass has four degrees of freedom, three of which give mass to the W and Z. The fourth degree does not do anything; that's the Higgs boson. It is the remnant of this process. It's just an excitation mode of the scalar field, but it is the latter which gives mass. That does not mean that it is not important to find the Higgs. It's like finding a trace left in the sand by a dinosaur: it proves that the dinosaurs existed.

14. Henry Bolden says:

15. DB says:

The Higgs will be the first fundamental boson discovered whose spin is not equal to 1. And the mass of 125 GeV makes the building of a muon collider to probe the properties of the Higgs in fine detail a no-brainer. It also raises serious questions over the need for the CLIC upgrades to the LHC.

16.
Henry Bolden says:

I'm hearing from someone (who does not wish to be named) who heard from someone else at the Perimeter Institute (whose name was not revealed to me) that the announcement on July 4 involves a Higgs which is NOT a Standard Model Higgs. Anyone else hearing this?

17. Speculative says:

Henry, I'd be wary of anyone who is saying right now that we know the Higgs they're seeing is beyond the Standard Model. It will take a lot of careful measurements before we know for sure. If there is something about this particle that distinguishes it significantly from the SM Higgs, that would certainly be very interesting, and it might not even be the Higgs. My attitude is that we'll just have to wait and see until July 4th when more official data is released.

18. Anonyrat says:

It is time for corporate sponsorship: just like the MetLife Stadium or the Citi Bank Arena, we could have corporations take naming rights on particles, such as the Disney Strange, the Dow-Jones Up, the Balenciaga Top, the Huggies Higgs. The proceeds of the sponsorships would go to support impecunious superstringers, who promise to bring a whole lot of new particles, and hence sponsorship opportunities, to the table. At least it would give some motivation for research. Some names might be already taken, such as Selectron Technologies' Selectron. Political parties might jump into the fray, such as the G.O.P. Dilaton. Social groups might enter the bidding too, such as the LGBT Spartner. The possibilities are limitless; we could further distinguish particles in different superstring vacua. With 10^500 possibilities, there will be enough for sponsorship by all the conceivable corporations the visible universe will ever contain. Hell, we can give each corporation sponsorship of an entire universe; why just a measly elementary particle?

19. Tony Smith says:

If the thing at 125 GeV "is NOT a Standard Model Higgs", then can they distinguish it from a non-Higgs particle such as, for example, a technipion like that proposed by Eichten, Lane, Martin, and Pilon in arXiv 1206.0186 (in the context of the CDF Wjj bump)? If it is not so distinguishable (and therefore not clearly any kind of Higgs), then what should they call it?
Tony

20. Peter Woit says:

Henry,
The signal seen in 2011 was already larger than the SM prediction (with large errors). The rumor that this year's gamma-gamma signal is of similar size indicates that when they announce discovery next week, the size of the signal seen will not only be more than 5 sigma away from null, but also larger than the SM prediction. There will be signals, though, in multiple channels: gamma-gamma, as well as ZZ to 4 leptons. The size of the signal is the product of the Higgs cross section x branching ratio. Whatever is observed, undoubtedly there will be dozens of theory papers promoting models supposedly explaining it. I'd love to hear from a Higgs phenomenologist about how good the SM Higgs cross-section calculation is, and what to look for in terms of deviations of the signal sizes from the nominal SM predictions.
Peter,
Not unusually, you have hit the nail on the head regarding the things I have been wondering, and three times I might add: (1) there are apparently hints of SM discrepancies in branching ratios; when does this become significant? (2) uninitiated hep-ph folks like me would like to know the SM calculational "error bars" for such (fully of course realizing that the experimental notion of an error bar does not really apply); and (3) [at the risk of an understandably deletable ad hominem comment] I for one cannot fathom what particular axe MS has chosen to grind (this time).

22. Anonyrat says:

23. Anonyrat says:

See table 6 on page 27 of the above.

24. Seth Thatcher says:

A Higgs boson walks into a bar.... mass exodus.

25. Martin says:

Yay, we've got a particle! Let's just hope it won't turn into a diffraction pattern when we stop talking about it 🙂
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.672292947769165, "perplexity": 1557.765946885135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171271.48/warc/CC-MAIN-20170219104611-00523-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-equation-that-im-stuck-on.155866/
# An equation that I'm stuck on

1. Feb 12, 2007

### thomas49th

1. The problem statement, all variables and given/known data

[The problem was given as an image, not preserved here. From the replies: a table of S values for 6 values of t, with the instruction to fit an equation of the form S = at² + bt + c.]

2. Relevant equations

As in the image above.

3. The attempt at a solution

I had a go, but I rubbed it off the sheet. Can someone take me through, step by step, how to solve it? Thanks

2. Feb 12, 2007

### HallsofIvy

Staff Emeritus
What is your question? You are told exactly what to do! You are told that the equation must be of the form S = at² + bt + c. You are given t and S values for 6 different values of t. Putting the given S and t values into the equation above for 3 different values of t (t = 0, 2, 3, as they suggest, will do, but the choice is yours) will give you 3 equations to solve for a, b, and c. (Taking t = 0 will give an especially easy equation!)

3. Feb 12, 2007

### thomas49th

so
t=0: 0 = 0 + 0 + 0 (c will have to be 0 if a and b are)
t=1: 10 = a + b + c
t=2: 30 = 4a + 2b + c
is that right?

4. Feb 12, 2007

### jing

But where does it say a and b are 0?

t=0: 0 = a·0 + b·0 + c, and hence c = 0

The other equations are correct.

5. Feb 12, 2007

### thomas49th

if t = 0 then
a × t = 0
b × t = 0
so c must be 0 if the equation equals 0

6. Feb 12, 2007

### jing

Correct. Now use the fact that you know c = 0 in the other two equations.

7. Feb 12, 2007

### thomas49th

Huh? I thought the other 2 equations were right?

t=0: 0 = 0a + 0b + 0
t=1: 10 = a + b + c
t=2: 30 = 4a + 2b + c

Can't I write them as that?

8. Feb 12, 2007

### jing

Yes, they are correct, but you now know that c = 0, so a + b + c = a + b and 4a + 2b + c = 4a + 2b.
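The thread stops just short of the final solve. Here is a sketch that finishes the calculation, using the data points quoted in the posts (t = 0, 1, 2 with S = 0, 10, 30); everything else in it is mine:

```python
import numpy as np

# S = a*t^2 + b*t + c evaluated at t = 0, 1, 2 gives a linear system
# in the unknowns (a, b, c), one row per data point.
t = np.array([0.0, 1.0, 2.0])
S = np.array([0.0, 10.0, 30.0])
M = np.column_stack([t**2, t, np.ones_like(t)])

a, b, c = np.linalg.solve(M, S)
print(a, b, c)                    # 5.0, 5.0, 0.0  ->  S = 5t^2 + 5t

# Check against the thread's equations: a+b+c = 10 and 4a+2b+c = 30.
print(a + b + c, 4*a + 2*b + c)
```

By hand: with c = 0, the equations a + b = 10 and 4a + 2b = 30 give 2a + b = 15, so a = 5 and b = 5.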
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8095256090164185, "perplexity": 1674.3335709230162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.enotes.com/homework-help/tank-circuit-uses-0-09-uh-inductor-0-4-uf-452356
# A tank circuit uses a 0.09 uH inductor and a 0.4-uF capacitor. The resistance of the inductor is 0.3 ohms. Would the quality of the inductor be 158 and the bandwidth be 5.3?

The quality factor Q of a component is by DEFINITION the ratio between the power stored and the power lost in that particular component at resonance. The higher the quality factor, the higher the amplitude of the oscillations.

Q = Energy Stored/Energy Lost = Power Stored/Power Lost = Ps/Pl

at the resonant frequency of

`F_r = 1/(2*pi*sqrt(L*C)) = 1/(2*pi*sqrt(9*10^(-8)*4*10^(-7))) = 838.82 kHz`

If we write the quality factor as `Q = F_r/(Delta(F))` we can find the bandwidth of the circuit as

`Delta(F) = F_r/Q`

For a SERIES circuit (which is the case here, because the inductor resistance is always considered in series with the inductance) the current I through all components is the same, and the stored and lost powers are

`Ps = I^2*X(L) = I^2*omega*L`

`Pl = I^2*R`

which gives

`Q = (omega_r*L)/R = (2*pi*F_r*L)/R = (2*pi*838820*9*10^(-8))/0.3 = 1.5811`

The bandwidth of the circuit is

`Delta(F) = F_r/Q = 838820/1.5811 ≈ 530530 Hz = 530.5 kHz`

For the circuit to have a quality factor Q ≈ 158 the component values would need to be

L = 0.9 mH (not 0.09 µH)

C = 0.4 µF

but in this case the bandwidth will be `Delta(F) = 53 Hz`
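As a quick cross-check of the arithmetic above (Python is assumed as the tool; it is not part of the original answer):

```python
# Check of the tank-circuit numbers above (series RLC model of a real
# inductor: resistance R in series with L, resonating with C).
import math

def tank(L, C, R):
    f_r = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency [Hz]
    Q = 2 * math.pi * f_r * L / R               # quality factor (= sqrt(L/C)/R)
    return f_r, Q, f_r / Q                      # bandwidth = f_r / Q [Hz]

print(tank(0.09e-6, 0.4e-6, 0.3))  # as asked: f_r ~ 838.8 kHz, Q ~ 1.58, BW ~ 530.5 kHz
print(tank(0.9e-3, 0.4e-6, 0.3))   # with L = 0.9 mH: f_r ~ 8.39 kHz, Q ~ 158, BW ~ 53 Hz
```

The second line confirms why the question's expected Q of 158 only comes out if the inductance is 0.9 mH rather than 0.09 µH.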
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979353666305542, "perplexity": 1923.1688047280786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00685.warc.gz"}
https://www.lessonplanet.com/teachers/writing-similes-6th-10th
Writing Similes

In this writing-similes instructional activity, students study the comparison technique as they use similes to compare 10 items to something else. Students then choose one of the similes to write a short poem.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826844692230225, "perplexity": 5257.8552853784695}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448095.6/warc/CC-MAIN-20170728062501-20170728082501-00101.warc.gz"}
http://mathhelpforum.com/pre-calculus/211806-simplifying-not-sure-how-do.html
# Thread: Simplifying.. not sure how to do it..

1. ## Simplifying.. not sure how to do it..

(-4x^-6 * y^2)^3

to make it easier to read: http://i45.tinypic.com/2pzeeeq.jpg

Thanks a lot

2. ## Re: Simplifying.. not sure how to do it..

Hello, phyfreak!

$\text{Simplify: }\:\left(-4x^{-6}y^2\right)^3$

$\left(-4x^{-6}y^2\right)^3 \;=\;(-4)^3\left(x^{-6}\right)^3\left(y^2\right)^3 \;=\; -64x^{-18}y^6 \;=\;-\frac{64y^6}{x^{18}}$
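A quick symbolic check of the same simplification (SymPy is assumed as the tool; it is not part of the thread):

```python
# Symbolic check of the simplification in the thread above.
import sympy as sp

x, y = sp.symbols('x y')
expr = (-4 * x**-6 * y**2) ** 3
# With an integer exponent, expand() distributes the power over each factor.
print(sp.expand(expr))   # expected: -64*y**6/x**18
```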
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689597845077515, "perplexity": 12175.255584007553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703334458/warc/CC-MAIN-20130516112214-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/a/ad+hoc+networks.html
#### Sample records for ad hoc networks

John Wiley & Sons 2004-01-01 "Assimilating the most up-to-date information on research and development activities in this rapidly growing area, Mobile Ad Hoc Networking covers physical, data link, network, and transport layers, as well as application, security, simulation, and power management issues in sensor, local area, personal, and mobile ad hoc networks. Each of the book's sixteen chapters has been written by a top expert and discusses in-depth the most important topics in the field. Mobile Ad Hoc Networking is an excellent reference and guide for professionals seeking an in-depth examination of topics that also provides a comprehensive overview of the current state-of-the-art."--Jacket.

Rashvand, Habib 2013-01-01 Motivated by the exciting new application paradigm of using amalgamated technologies of the Internet and wireless, the next generation communication networks (also called 'ubiquitous', 'complex' and 'unstructured' networking) are changing the way we develop and apply our future systems and services at home and on local, national and global scales. Whatever the interconnection - a WiMAX enabled networked mobile vehicle, MEMS or nanotechnology enabled distributed sensor systems, Vehicular Ad hoc Networking (VANET) or Mobile Ad hoc Networking (MANET) - all can be classified under new networking s…

Hong-Chuan Yang 2007-01-01 Full Text Available We study the energy-efficient configuration of multihop paths with the automatic repeat request (ARQ) mechanism in wireless ad hoc networks. We adopt a cross-layer design approach and take both the quality of each radio hop and the battery capacity of each transmitting node into consideration. Under certain constraints on the maximum tolerable transmission delay and the required packet delivery ratio, we solve optimization problems to jointly schedule the transmitting power of each transmitting node and the retransmission limit over each hop. Numerical results demonstrate that the path configuration methods can either significantly reduce the average energy consumption per packet delivery or considerably extend the average lifetime of the multihop route.

4. Intellectual mobile Ad Hoc networks Sova, Oleg; Romanjuk, Valeriy; Bunin, Sergey; Zhuk, Pavlo 2012-01-01 In this article the intellectualization of Mobile Ad Hoc Network resource management is offered. A decomposition of the primary goal of MANET functioning into simple subgoals is proposed, and a fragment of the MANET node target structure is presented.

Blanca Alicia Correa 2007-01-01 Full Text Available Clustering methods allow MANETs (mobile ad hoc networks) to achieve better performance in terms of connection speed, routing, and topology management. This work presents a review of clustering techniques for MANETs. Some preliminary topics that form the basis for the development of clustering algorithms are introduced, such as network topology, routing, graph theory, and mobility algorithms. Additionally, some of the best-known clustering techniques are described, such as the Lowest-ID heuristic, the Highest-degree heuristic, DMAC (distributed mobility-adaptive clustering), and WCA (weighted clustering algorithm), among others. The central purpose is to illustrate the main concepts regarding clustering techniques in MANETs. 6.
Service placement in ad hoc networks Wittenburg, Georg 2012-01-01 Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework…

7. Trust Based Routing in Ad Hoc Network Talati, Mikita V.; Valiveti, Sharada; Kotecha, K. An Ad Hoc network is often termed an infrastructure-less, self-organized or spontaneous network. The execution and survival of an ad-hoc network are solely dependent upon the cooperative and trusting nature of its nodes. However, this naive dependency on intermediate nodes makes the ad-hoc network vulnerable to passive and active attacks by malicious nodes, which can inflict severe damage. A number of protocols have been developed to secure ad-hoc networks using cryptographic schemes, but all rely on the presence of a trust authority. Due to the mobility of nodes and the limited resources of wireless networks, one interesting research area in MANETs is routing. This paper offers various trust models and trust-based routing protocols to improve the trustworthiness of the neighborhood. Thus it helps in selecting the most secure and trustworthy route from the available ones for the data transfer.

8. Multilevel security model for ad hoc networks Wang Changda; Ju Shiguang 2008-01-01 Modern battlefield doctrine is based on mobility, flexibility, and rapid response to changing situations. As is well known, mobile ad hoc network systems are among the best utilities for battlefield activity. Although much research has been done on secure routing, security issues have largely been ignored in applying mobile ad hoc network theory to computer technology. An ad hoc network is usually assumed to be homogeneous, which is an irrational assumption for armies. It is clear that soldiers, commanders, and commanders-in-chief should have different security levels and computation powers as they have access to asymmetric resources. Imitating basic military rank levels in battlefield situations, how multilevel security can be introduced into ad hoc networks is indicated, thereby controlling restricted classified information flows among nodes that have different security levels.
10. Ad hoc networks telecommunications and game theory 2015-01-01 Random S-ALOHA and CSMA protocols that are used for MAC access in ad hoc networks are very limited in the face of the multiple and spontaneous use of the transmission channel, so they have low immunity to the problems of packet collisions. Indeed, the transmission time is the critical factor in the operation of such networks. The simulations demonstrate the positive impact of erasure codes on the throughput of the transmission in ad hoc networks. However, the network still suffers from the intermittency and volatility of its efficiency throughout its operation, and it switches quickly to saturation…

11. Context discovery in ad-hoc networks Liu, Fei 2011-01-01 Mobile ad-hoc networks (MANETs) are more and more present in our daily life. Such networks are often composed of mobile and battery-supplied devices, like laptops and PDAs. With no requirement for infrastructure support, MANETs can be used as temporary networks, such as for conference and office environments…

12. Secure Clustering in Vehicular Ad Hoc Networks Zainab Nayyar 2015-09-01 Full Text Available A vehicular ad-hoc network is composed of moving cars as nodes without any infrastructure. Nodes self-organize to form a network over radio links. Security issues are commonly observed in vehicular ad hoc networks, like authentication and authorization issues. Secure clustering plays a significant role in VANETs. In recent years, various secure clustering techniques with distinguishing features have been newly proposed. In order to provide a comprehensive understanding of these techniques designed for VANETs and to pave the way for further research, a survey of the secure clustering techniques is discussed in detail in this paper. Qualitatively, as a result of highlighting various techniques of secure clustering, certain conclusions are drawn which will enhance the availability and security of vehicular ad hoc networks. Nodes present in the clusters will work more efficiently and the message passing within the nodes will also be better authenticated by the cluster heads.

13. Vehicular ad hoc network security and privacy Lin, X 2015-01-01 Unlike any other book in this area, this book provides innovative solutions to security issues, making this book a must-read for anyone working with or studying security measures. Vehicular Ad Hoc Network Security and Privacy mainly focuses on security and privacy issues related to vehicular communication systems. It begins with a comprehensive introduction to vehicular ad hoc networks and their unique security threats and privacy concerns and then illustrates how to address those challenges in highly dynamic and large-size wireless network environments from multiple perspectives. This book is richly illustrated with detailed designs and results for approaching security and privacy threats.

14. Intermittently connected mobile ad hoc networks Jamalipour, Abbas 2011-01-01 In the last few years, there has been extensive research activity in the emerging area of Intermittently Connected Mobile Ad Hoc Networks (ICMANs).
By considering the nature of intermittent connectivity in most real-world mobile environments without any restrictions placed on users' behavior, ICMANs are eventually formed without any assumption with regard to the existence of an end-to-end path between two nodes wishing to communicate. This is different from the conventional Mobile Ad Hoc Networks (MANETs), which have been implicitly viewed as a connected graph with established complete paths between…

15. DAWN: Dynamic Ad-hoc Wireless Network 2016-06-19 Cited publications include: Ning Li and Jennifer C. Hou, "Localized Topology Control Algorithms for Heterogeneous Wireless Networks," IEEE; "Multi-User Diversity in Single-Radio OFDMA Ad Hoc Networks Based on Gibbs Sampling," IEEE Milcom, 03-NOV-10; and Hui Xu, Xianren Wu, Hamid R. Sadjadpour, and J.J. Garcia-Luna-Aceves, "A Unified Analysis of Routing Protocols in MANETs," IEEE.

16. Overlaid Cellular and Mobile Ad Hoc Networks Huang, Kaibin; Chen, Bin; Yang, Xia; Lau, Vincent K N 2008-01-01 In cellular systems using frequency division duplex, growing Internet services cause an imbalance of uplink and downlink traffic, resulting in poor uplink spectrum utilization. Addressing this issue, this paper considers overlaying an ad hoc network onto a cellular uplink network for improving spectrum utilization and spatial reuse efficiency. Transmission capacities of the overlaid networks are analyzed, which are defined as the maximum densities of the ad hoc nodes and mobile users under an outage constraint. Using tools from stochastic geometry, the capacity tradeoff curves for the overlaid networks are shown to be linear. Deploying overlaid networks based on frequency separation is proved to achieve higher network capacities than that based on spatial separation. Furthermore, spatial diversity is shown to enhance network capacities.

17. Evolutionary algorithms for mobile ad hoc networks Dorronsoro, Bernabé; Danoy, Grégoire; Pigné, Yoann; Bouvry, Pascal 2014-01-01 Describes how evolutionary algorithms (EAs) can be used to identify, model, and minimize day-to-day problems that arise for researchers in optimization and mobile networking. Mobile ad hoc networks (MANETs), vehicular networks (VANETs), sensor networks (SNs), and hybrid networks—each of these require a designer's keen sense and knowledge of evolutionary algorithms in order to help with the common issues that plague professionals involved in optimization and mobile networking. This book introduces readers to both mobile ad hoc networks and evolutionary algorithms, presenting basic concepts as well as detailed descriptions of each. It demonstrates how metaheuristics and evolutionary algorithms (EAs) can be used to help provide low-cost operations in the optimization process—allowing designers to put some "intelligence" or sophistication into the design. It also offers efficient and accurate information on dissemination algorithms, topology management, and mobility models to address challenges in the …

18. Constrained Delaunay Triangulation for Ad Hoc Networks D. Satyanarayana 2008-01-01 Full Text Available Geometric spanners can be used for efficient routing in wireless ad hoc networks. Computation of existing spanners for ad hoc networks primarily focused on geometric properties without considering network requirements. In this paper, we propose a new spanner called constrained Delaunay triangulation (CDT), which considers both geometric properties and network requirements.
The CDT is formed by introducing a small set of constraint edges into the local Delaunay triangulation (LDel) to reduce the number of hops between nodes in the network graph. We have simulated the CDT using the network simulator (ns-2.28) and compared it with the Gabriel graph (GG), relative neighborhood graph (RNG), local Delaunay triangulation (LDel), and planarized local Delaunay triangulation (PLDel). The simulation results show that the minimum number of hops from source to destination is lower than in the other spanners. We also observed a decrease in delay and jitter, and an improvement in throughput.

19. Data Confidentiality in Mobile Ad hoc Networks Hamza Aldabbas 2012-03-01 Full Text Available Mobile ad hoc networks (MANETs) are self-configuring infrastructure-less networks comprised of mobile nodes that communicate over wireless links without any central control on a peer-to-peer basis. These individual nodes act as routers to forward both their own data and also their neighbours' data by sending and receiving packets to and from other nodes in the network. The relatively easy configuration and the quick deployment make ad hoc networks suitable for emergency situations (such as human or natural disasters) and for military units in enemy territory. Securing data dissemination between these nodes in such networks, however, is a very challenging task. Exposing such information to anyone else other than the intended nodes could cause a privacy and confidentiality breach, particularly in military scenarios. In this paper we present a novel framework to enhance the privacy and data confidentiality in mobile ad hoc networks by attaching the originator policies to the messages as they are sent between nodes. We evaluate our framework using the Network Simulator (NS-2) to check whether the privacy and confidentiality of the originator are met. For this we implemented the Policy Enforcement Points (PEPs) as NS-2 agents that manage and enforce the policies attached to packets at every node in the MANET.

1. Transmission Strategies in MIMO Ad Hoc Networks Fakih Khalil 2009-01-01 Full Text Available Abstract The precoding problem in multiple-input multiple-output (MIMO) ad hoc networks is addressed in this work. Firstly, we consider the problem of maximizing the system mutual information under a power constraint. In this context, we give a brief overview of the nonlinear optimization methods, and systematically we compare their performances. Then, we propose a fast and distributed algorithm based on the quasi-Newton methods to give a lower bound of the system capacity of MIMO ad hoc networks. Our proposed algorithm solves the maximization problem while diminishing the amount of information in the feedback links needed in the cooperative optimization. Secondly, we propose a different problem formulation, which consists in minimizing the total transmit power under a quality-of-signal constraint. This novel problem design is motivated since the packets are captured in ad hoc networks based on their signal-to-interference-plus-noise ratio (SINR) values. We convert the proposed formulation into a semidefinite optimization problem, which can be solved numerically using interior point methods. Finally, an extensive set of simulations validates the proposed algorithms.

2. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks Kaaniche, Heni 2010-01-01 Mobility prediction allows estimating the stability of paths in mobile wireless Ad Hoc networks.
Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural-network-based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using the backpropagation-through-time algorithm for training.

V. Balaji 2011-01-01 Full Text Available Problem statement: An inherent feature of mobile ad hoc networks is the frequent change of network topology, leading to stability and reliability problems of the network. Highly dynamic and dense networks have to maintain an acceptable level of service for data packets and limit the network control overheads. This capability is closely related to how quickly the network protocol control overhead is managed as a function of increased link changes. Dynamically limiting the routing control overheads based on the network topology improves the throughput of the network. Approach: In this study we propose the Varying Overhead Ad hoc on Demand Vector routing protocol (VO-AODV) for highly dynamic mobile Ad hoc networks. The proposed VO-AODV routing protocol dynamically modifies the active route time based on the network topology. Results and Conclusion: Simulation results prove that the proposed model decreases the control overheads without decreasing the QoS of the network.

4. Intrusion detection in wireless ad-hoc networks Chaki, Nabendu 2014-01-01 Presenting cutting-edge research, Intrusion Detection in Wireless Ad-Hoc Networks explores the security aspects of the basic categories of wireless ad-hoc networks and related application areas. Focusing on intrusion detection systems (IDSs), it explains how to establish security solutions for the range of wireless networks, including mobile ad-hoc networks, hybrid wireless networks, and sensor networks. This edited volume reviews and analyzes state-of-the-art IDSs for various wireless ad-hoc networks. It includes case studies on honesty-based intrusion detection systems, cluster oriented-based…

5. Fundamental Properties of Wireless Mobile Ad-hoc Networks Hekmat, R. 2005-01-01 Wireless mobile ad-hoc networks are formed by mobile devices that set up a possibly short-lived network for communication needs of the moment. Ad-hoc networks are decentralized, self-organizing networks capable of forming a communication network without relying on any fixed infrastructure. Each node…

6. Spontaneous ad hoc mobile cloud computing network. Lacuesta, Raquel; Lloret, Jaime; Sendra, Sandra; Peñalver, Lourdes 2014-01-01 Cloud computing helps users and companies to share computing resources instead of having local servers or personal devices to handle the applications. Smart devices are becoming one of the main information processing devices. Their computing features are reaching levels that let them create a mobile cloud computing network. But sometimes they are not able to create it and collaborate actively in the cloud because it is difficult for them to easily build a spontaneous network and configure its parameters. For this reason, in this paper, we present the design and deployment of a spontaneous ad hoc mobile cloud computing network. In order to perform it, we have developed a trusted algorithm that is able to manage the activity of the nodes when they join and leave the network. The paper shows the network procedures and classes that have been designed.
Our simulation results using Castalia show that our proposal presents good efficiency and network performance even with a high number of nodes.

7. Ad hoc mobile wireless networks principles, protocols, and applications Sarkar, Subir Kumar 2013-01-01 The military, the research community, emergency services, and industrial environments all rely on ad hoc mobile wireless networks because of their simple infrastructure and minimal central administration. Now in its second edition, Ad Hoc Mobile Wireless Networks: Principles, Protocols, and Applications explains the concepts, mechanisms, design, and performance of these highly valued systems. Following an overview of wireless network fundamentals, the book explores MAC layer, routing, multicast, and transport layer protocols for ad hoc mobile wireless networks. Next, it examines quality of serv…

8. Ad hoc mobile wireless networks principles, protocols and applications 2007-01-01 Ad hoc mobile wireless networks have seen increased adoption in a variety of disciplines because they can be deployed with simple infrastructures and virtually no central administration. In particular, the development of ad hoc wireless and sensor networks provides tremendous opportunities in areas including disaster recovery, defense, health care, and industrial environments. Ad Hoc Mobile Wireless Networks: Principles, Protocols and Applications explains the concepts, mechanisms, design, and performance of these systems. It presents in-depth explanations of the latest wireless technologies…

9. ITMAN: An Inter Tactical Mobile Ad Hoc Network Routing Protocol Grandhomme, Florian; Guette, Gilles; Ksentini, Adlen; Plesse, Thierry 2016-01-01 International audience; New-generation radio equipment, used by soldiers and vehicles on the battlefield, constitutes ad hoc networks and, specifically, Mobile Ad hoc NETworks (MANETs). The battlefield where this equipment is deployed involves mostly coalition communication. Each group on the battleground may communicate with other members of the coalition and establish inter-MANET links. Operational communications tend to provide tactical ad hoc networks with some capacities. There is a be...

10. High Secure Fingerprint Authentication in Ad hoc Network P.Velayutham 2010-01-01 In this paper, the methodology proposed is a novel robust approach to secure fingerprint authentication and matching techniques to implement in ad-hoc wireless networks. This is a difficult problem in ad-hoc networks, as it involves bootstrapping trust between the devices. This journal presents a solution which provides fingerprint authentication techniques for devices to share their communication in an ad-hoc network. In this approach, devices exchange a corresponding fingerprint with a master device fo...

11. CAPACITY EVALUATION OF MULTI-CHANNEL WIRELESS AD HOC NETWORKS Li Jiandong; Zygmunt J. Haas; Min Sheng 2003-01-01 In this paper, the capacity of multi-channel, multi-hop ad hoc networks is evaluated. In particular, the performance of multi-hop ad hoc networks with a single-channel IEEE 802.11 MAC utilizing different topologies is shown. Also, the scaling laws of throughput for large-scale ad hoc networks and the theoretical guaranteed throughput bounds for multi-channel grid-topology systems are proposed. The results presented in this work will help researchers to choose the proper parameter settings in the evaluation of protocols for multi-hop ad hoc networks.
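The capacity-evaluation record above leans on throughput scaling laws for large-scale ad hoc networks but does not state them. As a point of reference only, the sketch below evaluates the classic Gupta–Kumar per-node scaling Θ(W/√(n·log n)) for random ad hoc networks; this is an assumption for illustration, not necessarily the bound that particular paper derives.

```python
# Illustrative only: the classic Gupta-Kumar per-node throughput scaling
# Theta(W / sqrt(n log n)) for random ad hoc networks. The record above
# does not state its own formulas, so this is an assumed reference point.
import math

def per_node_throughput(n, W=1.0):
    """Order-of-magnitude per-node throughput for n nodes sharing W bps."""
    return W / math.sqrt(n * math.log(n))

for n in (10, 100, 1000, 10000):
    print(n, round(per_node_throughput(n), 4))
# Per-node throughput shrinks as the network grows, which is why
# multi-channel designs and topology control matter at scale.
```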
阿姆贾德 2002-01-01 A novel mechanism was specified by which a node in an ad hoc network may autoconfigure an IP address that is unique throughout the mobile ad hoc network. This new algorithm imposes low and constant overhead and delay in obtaining an IP address, fully utilizes the available address space of an ad hoc network, is independent of the existing routing protocol, and is less prone to security threats. Moreover, a new Join/Leave mechanism was proposed as an enhancement to the new IP address autoconfiguration algorithm, to support the overall operation of the existing routing protocols of wireless ad hoc networks.

13. Vehicular Ad Hoc Network Mobility Model 2014-01-01 Full Text Available Indonesia is one of the developing countries with high land traffic density. This traffic density can cause traffic jams, traffic accidents and other disturbances. This research developed a simulator that can calculate the traffic density of roads in urban areas. With the use of this simulator, the researcher can calculate the time needed if the source node transports the message to the destination node by using the ad hoc network communication facility. In this research, every vehicle utilizes multi-hop communication in a communication network. The vehicle sends the message through a flooding message and passes on the received message to other vehicles. Based on the simulation done on a map of size 10 km x 10 km with a total of 20 vehicles on the road, it was calculated that the simulator could transmit the message to its destination on the 106th second from node 3, and with a total of 200 vehicles on the road, the simulator could transmit the message to its destination on the 22nd second from node 5.

14. A NOVEL ROUTING ATTACK IN MOBILE AD HOC NETWORKS DR. N. SATYANARAYANA 2010-12-01

15. Modeling Terrain Impact on Mobile Ad Hoc Networks (MANET) Connectivity Lance Joneckis; Corinne Kramer; David Sparrow; David Tate 2014-05-01 Terrain affects connectivity in mobile ad hoc networks (MANET). Both average pairwise link closure and the rate…

16. Multihost ad-hoc network with the clustered Security networks J.Manikandan, 2010-03-01 Full Text Available Security has become a primary concern in order to provide protected communication between mobile nodes in a hostile environment. Unlike wireline networks, a mobile ad-hoc network is a collection of autonomous nodes or terminals which communicate with each other by forming a multihop radio network and maintaining connectivity in a decentralized manner, over a dynamic network topology. These challenges clearly make a case for building multifence security solutions that achieve both broad protection and desirable network performance. In this paper we focus on the fundamental security problem of protecting the multihop network connectivity between mobile nodes in a MANET. We identify the security issues related to this problem, discuss the challenges to security design, and review the security proposals that protect multihop wireless networks. Some security mechanisms used in wired networks cannot simply be applied to protect an ad-hoc network.
After analyzing various types of attacks on ad-hoc networks, a security scheme for the famous routing protocol DSR (Dynamic Source Routing) is proposed; complete security solutions should cluster the nodes of a MANET and encompass the security components of prevention, detection and reaction.

17. Decentralized Network-level Synchronization in Mobile Ad Hoc Networks Voulgaris, Spyros; Dobson, Matthew; Steen, van Maarten 2016-01-01 Energy is the scarcest resource in ad hoc wireless networks, particularly in wireless sensor networks requiring a long lifetime. Intermittently switching the radio on and off is widely adopted as the most effective way to keep energy consumption low. This, however, prevents the very goal of communic…

18. Flooding attack and defence in Ad hoc networks Yi Ping; Hou Yafei; Zhong Yiping; Zhang Shiyong; Dai Zhoulin 2006-01-01 Mobile ad hoc networks are particularly vulnerable to denial of service (DoS) attacks launched through compromised nodes or intruders. In this paper, we present a new DoS attack and its defense in ad hoc networks. The new DoS attack, called the Ad hoc Flooding Attack (AHFA), is one in which an intruder broadcasts masses of Route Request packets to exhaust the communication bandwidth and node resources so that valid communication cannot be maintained. After analyzing the Ad hoc Flooding Attack, we develop Flooding Attack Prevention (FAP), a generic defense against the Ad hoc Flooding Attack. When the intruder broadcasts excessive Route Request packets, the immediate neighbors of the intruder record the rate of Route Requests. Once the threshold is exceeded, nodes deny any future request packets from the intruder (see the sketch below). The results of our implementation show FAP can prevent the Ad hoc Flooding Attack efficiently.

19. Enhanced Weight based DSR for Mobile Ad Hoc Networks Verma, Samant; Jain, Sweta 2011-12-01 Routing in ad hoc networks is a hard problem, since a good routing protocol must ensure fast and efficient packet forwarding, which isn't evident in ad hoc networks. In the literature there exist lots of routing protocols; however, they don't include all the aspects of ad hoc networks, such as mobility and device and medium constraints, which makes these protocols inefficient for some configurations and categories of ad hoc networks. Thus in this paper we propose an improvement of Weight Based DSR in order to include some of the aspects of ad hoc networks, such as stability, remaining battery power, load and trust factor, proposing a new approach called Enhanced Weight Based DSR.

Jalil Piran, Mohammad; Cho, Yongwoo; Yun, Jihyeok; Ali, Amjad; Suh, Doug Young 2014-01-01 ... the spectrum scarcity issue. We have already proposed vehicular ad hoc and sensor networks (VASNET) as a new networking paradigm for vehicular communication by utilizing wireless sensor nodes in two mobile and stationary modes...

1. A Performance Comparison of Routing Protocols for Ad Hoc Networks Hicham Zougagh 2014-09-01 Full Text Available A Mobile Ad hoc Network (MANET) is a collection of mobile nodes in which the wireless links are frequently broken down due to mobility and dynamic infrastructure. Routing is a significant issue and challenge in ad hoc networks. Many routing protocols, like OLSR and AODV, have been proposed so far to improve the routing performance and reliability. In this paper, we describe the Optimized Link State Routing Protocol (OLSR) and the Ad hoc On-Demand Distance Vector (AODV). We evaluate their performance through exhaustive simulations using the Network Simulator 2 (ns-2), varying conditions (node mobility, network density).
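Record 18 above describes the FAP defense as per-neighbor rate limiting of Route Request packets. Here is a minimal sketch of that idea; the window length and threshold are invented for illustration, since the record does not give the paper's actual parameters:

```python
# Minimal sketch of a FAP-style defense: each node tracks the rate of
# Route Request (RREQ) packets per neighbor and blacklists neighbors
# that exceed a threshold. Window length and threshold are illustrative
# values, not taken from the paper.
import time
from collections import defaultdict, deque

WINDOW_S = 1.0      # sliding window length in seconds (assumed)
MAX_RREQ = 10       # max RREQs per neighbor per window (assumed)

class FloodingGuard:
    def __init__(self):
        self.history = defaultdict(deque)   # neighbor id -> RREQ timestamps
        self.blacklist = set()

    def accept_rreq(self, neighbor, now=None):
        """Return True if the RREQ should be processed, False if dropped."""
        if neighbor in self.blacklist:
            return False
        now = time.monotonic() if now is None else now
        q = self.history[neighbor]
        q.append(now)
        while q and now - q[0] > WINDOW_S:   # drop timestamps outside window
            q.popleft()
        if len(q) > MAX_RREQ:                # rate exceeded: deny future RREQs
            self.blacklist.add(neighbor)
            return False
        return True
```

Each node runs its own guard, so the flooding intruder is cut off by its immediate neighbors without any coordination, which matches the distributed flavor of the defense described in the record.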
3. A Discovery Process for Initializing Ad Hoc Underwater Acoustic Networks 2008-12-01 The route establishment process is very similar to the Ad hoc On-Demand Distance Vector (AODV) routing protocol, except that AODV does not... research in network routing protocols. Chapter III provides an overview of the challenges posed by the physical ocean medium on acoustic communications... network routing strategies. RELATED WORK: While discovery and routing protocols for terrestrial ad hoc and wireless sensor networks have been…

4. Gossip Based Routing Protocol Design for Ad Hoc Networks Toqeer Mahmood; Tabbassam Nawaz; Rehan Ashraf; Syed M. Adnan Shah 2012-01-01 A spontaneously mannered, decentralized network with no formal infrastructure, limited in temporal and spatial extent, where each node communicates with the others over a wireless channel and is willing to forward data for other nodes, is called a Wireless Ad Hoc network. In this research study, we propose a routing strategy based on a gossip-based routing approach that follows proactive routing with some treatment for wireless Ad Hoc networks. The analytical verification of our proposed id...

5. Routing in Highly Dynamic Ad Hoc Networks: Issues and Challenges Varun G Menon 2016-04-01 Full Text Available The aim of this research paper is to analyze the various issues and challenges involved in the routing of data packets in highly mobile ad hoc networks. Routing in ad hoc networks has always been a challenging and tough task due to the dynamic topology and error-prone wireless channel. There are a number of issues, like the lack of centralized control, constantly moving nodes, etc., that have to be considered while routing a data packet from the source to the destination in the ad hoc network. Routing of data packets becomes much more difficult with increased mobility of nodes. This paper analyses the various issues in the routing of data packets from the source to the destination node and also lists the parameters that have to be considered while designing and selecting a routing protocol for highly mobile ad hoc networks.

Sun Baolin; Li Layuan 2006-01-01 In wireless ad hoc network environments, every link is wireless and every node is mobile. Those features make data easily lost as well as multicasting inefficient and unreliable. Moreover, efficient and reliable multicast in wireless ad hoc networks is a difficult issue. It is a major challenge to cope with the transmission delays and packet losses due to link changes of a multicast tree while providing a high delivery ratio for each packet transmission in a wireless ad hoc network environment. In this paper, we propose and evaluate the Reliable Adaptive Multicast Protocol (RAMP), based on a relay node concept. Relay nodes are placed along the multicast tree. Data recovery is done between relay nodes. RAMP supports reliable multicasting suitable for mobile ad hoc networks by reducing the number of packet retransmissions. We compare RAMP with SRM (Scalable Reliable Multicast). Simulation results show that RAMP has a high delivery ratio and low end-to-end delay for packet transmission.

7. An Efficient Proactive RSA Scheme for Ad Hoc Networks ZHANG Rui-shan; CHEN Ke-fei 2007-01-01 A proactive threshold signature scheme is very important to tolerate mobile attacks in mobile ad hoc networks. In this paper, we propose an efficient proactive threshold RSA signature scheme for ad hoc networks.
The scheme consists of three protocols: the initial secret share distribution protocol, the signature generation protocol and the secret share refreshing protocol. Our scheme has three advantages. First, the signature generation protocol is efficient. Second, the signature generation protocol is resilient. Third, the share refreshing protocol is efficient.

8. Ad Hoc Mobile Wireless Networks Routing Protocols - A Review Geetha Jayakumar 2007-01-01 Full Text Available Mobile ad hoc networks (MANETs) represent complex distributed systems that comprise wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary ad-hoc network topologies, allowing people and devices to seamlessly internetwork in areas with no preexisting communication infrastructure, e.g., disaster recovery environments. An ad-hoc network is not a new concept, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad-hoc paradigm. Recently the introduction of new technologies such as Bluetooth, IEEE 802.11 and HiperLAN is helping enable eventual commercial MANET deployments outside the military domain. These recent revolutions have been generating a renewed and growing interest in the research and development of MANETs. To facilitate communication within the network, a routing protocol is used to discover routes between nodes. The goal of the routing protocol is to have an efficient route establishment between a pair of nodes, so that messages can be delivered in a timely manner. Bandwidth and power constraints are the important factors to be considered in current wireless networks because multi-hop ad-hoc wireless networks rely on each node in the network to act as a router and packet forwarder. This dependency places bandwidth, power, and computation demands on mobile hosts that must be taken into account while choosing the protocol. Routing protocols used in wired networks cannot be used for mobile ad-hoc networks because of node mobility. The ad-hoc routing protocols are divided into two classes: table-driven and demand-based. This paper reviews and discusses the routing protocols belonging to each category.

9. Performance Evaluation of Important Ad Hoc Network Protocols 2006-01-01 Full Text Available A wireless ad hoc network is a collection of specific infrastructureless mobile nodes forming a temporary network without any centralized administration. A user can move anytime in an ad hoc scenario and, as a result, such a network needs routing protocols which can adapt to a dynamically changing topology. To accomplish this, a number of ad hoc routing protocols have been proposed and implemented, which include dynamic source routing (DSR), ad hoc on-demand distance vector (AODV) routing, and the temporally ordered routing algorithm (TORA). Although a considerable amount of simulation work has been done to measure the performance of these routing protocols, due to the constantly changing nature of these protocols, a new performance evaluation is essential. Accordingly, in this paper, we analyze the performance differentials to compare the above-mentioned commonly used ad hoc network routing protocols. We also analyzed the performance over varying loads for each of these protocols using OPNET Modeler 10.5. Our findings show that for specific differentials, TORA shows better performance over the two on-demand protocols, that is, DSR and AODV.
Our findings are expected to lead to further performance improvements of various ad hoc networks in the future.

10. Multiagent Based Information Dissemination in Vehicular Ad Hoc Networks S.S. Manvi 2009-01-01 Full Text Available Vehicular Ad hoc Networks (VANETs) are a compelling application of ad hoc networks, because of the potential to access specific context information (e.g. traffic conditions, service updates, route planning) and deliver multimedia services (Voice over IP, in-car entertainment, instant messaging, etc.). This paper proposes an agent-based information dissemination model for VANETs. A two-tier agent architecture is employed comprising the following: (1) 'lightweight', network-facing, mobile agents; (2) 'heavyweight', application-facing, norm-aware agents. The limitations of VANETs lead us to consider a hybrid wireless network architecture that includes Wireless LAN/Cellular and ad hoc networking for analyzing the proposed model. The proposed model provides flexibility, adaptability and maintainability for traffic information dissemination in VANETs as well as supporting robust and agile network management. The proposed model has been simulated in various network scenarios to evaluate the effectiveness of the approach.

11. Auto-configuration protocols in mobile ad hoc networks. Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez 2011-01-01 The TCP/IP protocol allows the different nodes in a network to communicate by associating a different IP address to each node. In wired or wireless networks with infrastructure, we have a server or node acting as such which correctly assigns IP addresses, but in mobile ad hoc networks there is no such centralized entity capable of carrying out this function. Therefore, a protocol is needed to perform the network configuration automatically and in a dynamic way, which will use all nodes in the network (or part thereof) as if they were servers that manage IP addresses. This article reviews the major proposed auto-configuration protocols for mobile ad hoc networks, with particular emphasis on one of the most recent: D2HCP. This work also includes a comparison of auto-configuration protocols for mobile ad hoc networks by specifying the most relevant metrics, such as a guarantee of uniqueness, overhead, latency, dependency on the routing protocol and uniformity.

12. Survey on Security Issues in Vehicular Ad Hoc Networks Bassem Mokhtar 2015-12-01 Full Text Available Vehicular Ad hoc NETworks are a special case of ad hoc networks in which, besides lacking infrastructure, communicating entities move with various accelerations. Accordingly, this impedes establishing reliable end-to-end communication paths and having efficient data transfer. Thus, VANETs have different network concerns and security challenges with respect to the availability of ubiquitous connectivity, secure communications, and reputation management systems, which affect the trust in cooperation and negotiation between mobile networking entities. In this survey, we discuss the security features, challenges, and attacks of VANETs, and we classify the security attacks of VANETs according to the different network layers.

13. Routing Protocol for Mobile Ad-hoc Wireless Networks I. M. B. Nogales 2007-09-01 Full Text Available Bluetooth is a cutting-edge technology used for implementing wireless ad hoc networks.
In order to provide an overall scheme for mobile ad hoc networks, this paper deals with scatternet topology formation and a routing algorithm to form larger ad hoc wireless networks. Scatternet topology formation starts by forming a robust network, which is less susceptible to the problems posed by node mobility. The mobile topology relies on the presence of free nodes that create multiple connections with the network and on their subsequently rejoining the network. Our routing protocol is a proactive routing protocol, which is tailor-made for the Bluetooth ad hoc network. The connection establishment connects nodes in a structure that simplifies packet routing and scheduling. The design allows nodes to arrive and leave arbitrarily, incrementally building the topology and healing partitions when they occur. We present simulation results that show that the algorithm presents low formation latency and also generates an efficient topology for forwarding packets along ad-hoc wireless networks.

Abdulaziz Al-Nahari 2016-01-01 Full Text Available Decreasing the route rediscovery time process in reactive routing protocols is challenging in mobile ad hoc networks. Links between nodes are continuously established and broken because of the characteristics of the network. Finding multiple routes to increase the reliability is also important but requires a fast update, especially under high traffic load and high mobility, where paths can be broken as well. The sender node keeps re-establishing path discovery to find new paths, which makes for a long time delay. In this paper we propose an improved multipath routing protocol, called the Receiver-based ad hoc on demand multipath routing protocol (RB-AOMDV), which takes advantage of the reliability of the state-of-the-art ad hoc on demand multipath distance vector (AOMDV) protocol with less re-established discovery time. The receiver node assumes the role of discovering paths when finding data packets that have not been received after a period of time. Simulation results show the delay and delivery ratio performances are improved compared with AOMDV. PMID:27258013
17. Robust message routing for mobile (wireless) ad hoc networks. Goldsby, Michael E.; Johnson, Michael M.; Kilman, Dominique Marie (Sandia National Laboratories, Albuquerque, NM); Bierbaum, Neal Robert; Chen, Helen Y.; Ammerlahn, Heidi R.; Tsang, Rose P.; Nicol, David M. (University of Illinois, Urbana, IL) 2004-01-01 This report describes the results of research targeting improvements in the robustness of message transport in wireless ad hoc networks. The first section of the report provides an analysis of throughput and latency in the wireless medium access control (MAC) layer and relates the analysis to the commonly used 802.11 protocol. The second section describes enhancements made to several existing models of wireless MAC and ad hoc routing protocols; the models were used in support of the work described in the following section. The third section of the report presents a lightweight transport layer protocol that is superior to TCP for use in wireless networks. In addition, it introduces techniques that improve the performance of any ad hoc source routing protocol. The fourth section presents a novel, highly scalable ad hoc routing protocol that is based on geographic principles but requires no localization hardware.

Sun Xuebin; Zhou Zheng 2003-01-01 Ad Hoc networks are prone to link failures due to mobility. In this letter, a link-perdurability-based routing scheme is proposed to try to deal with this problem. This scheme uses signal strength measurements to estimate the route lifetime and hence choose a stable route, and it is implemented in two typical ad hoc routing protocols to evaluate its performance. The simulation results have shown that this scheme can improve these protocols' packet delivery ratio in cases where there are frequent link failures.

19. Innovative research of AD HOC network mobility model Chen, Xin 2017-08-01 It is difficult for researchers of AD HOC networks to conduct actual deployment during the experimental stage, as the network topology is changeable and the locations of nodes are not fixed. Thus simulation still remains the main research method for the network. The mobility model is an important component of AD HOC network simulation. It is used to describe the movement pattern of nodes in an AD HOC network (including location and velocity, etc.) and decides the movement trail of nodes, serving as an abstraction of the movement modes of nodes. Therefore, the mobility model, which simulates node movement, is an important foundation for simulation research. In AD HOC network research, a mobility model should reflect the movement law of nodes as truly as possible. In this paper, a node generally refers to the wireless equipment people carry.
The main research contents include how nodes avoid obstacles during the movement process and the impacts of obstacles on the mutual relations among nodes, based on which a Node Self-Avoiding Obstacle (NASO) model is established for AD HOC networks.

20. A Study On OFDM In Mobile Ad Hoc Network Malik Nasereldin Ahmed 2012-06-01 Full Text Available Orthogonal Frequency Division Multiplexing (OFDM) is the physical layer in emerging wireless local area networks that are also being targeted for ad hoc networking. OFDM can also be exploited in ad hoc networks to improve the energy performance of mobile devices. It is important in wireless networks because it can be used adaptively in a dynamically changing channel. This study gives a detailed view of OFDM and how it is useful to increase the bandwidth. This paper also gives an idea about how OFDM can be a promising technology for high-capacity wireless communication.

1. A Survey of Mobile Ad Hoc Network Attacks 2010-09-01 Full Text Available Security is an essential requirement in mobile ad hoc networks (MANETs). Compared to wired networks, MANETs are more vulnerable to security attacks due to the lack of a trusted centralized authority and limited resources. Attacks on ad hoc networks can be classified as passive and active attacks, depending on whether the normal operation of the network is disrupted or not. In this paper, we describe all the prominent attacks described in the literature in a consistent manner to provide a concise comparison of attack types. To the best of our knowledge, this is the first paper that studies all the existing attacks on MANETs.

2. Dynamic Mobile IP routers in ad hoc networks Kock, B.A.; Schmidt, J.R. 2005-01-01 This paper describes a concept combining mobile IP and ad hoc routing to create a robust mobile network. In this network all nodes are mobile and globally and locally reachable under the same IP address. Essential for implementing this network are the dynamic mobile IP routers. They act as gateways…

3. A survey of TCP over ad hoc networks Al Hanbali, Ahmad; Altman, Eitan; Nain, Philippe 2005-01-01 The Transmission Control Protocol (TCP) was designed to provide reliable end-to-end delivery of data over unreliable networks. In practice, most TCP deployments have been carefully designed in the context of wired networks. Ignoring the properties of wireless ad hoc networks can lead to TCP implemen…

5. ANALYSIS OF ROUTING IN MOBILE AD-HOC NETWORKS Shweta 2012-09-01 Full Text Available An ad-hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any standalone infrastructure or centralized administration. Routing in ad hoc networks is a challenging problem because nodes are mobile and links are continuously being created and broken. In this model we not only improve the reputation of the network but also provide a routing approach for reliable data transmission and avoid loops occurring in the communication. The mobile network is a dynamic network that provides a solution for the inclusion and exclusion of dynamic nodes in the network.
AODV and DSR, the two most popular routing protocols for ad-hoc networks, are discussed here. In this paper we describe a way to find the node responsible for packet loss and to eliminate that node from the network without rerouting, providing reliable data transfer over the network. We also design and evaluate cooperative caching techniques to efficiently support data access in the ad-hoc network.

6. Reliable routing algorithm in marine ad hoc networks. LIN Wei; YANG Yong-Tian. 2004-01-01. A routing algorithm called DNH for increasing the efficiency of mobile ad hoc networks is presented. It is based on a new criterion called the Temporarily Steady State (TSS), combined with wireless transmission theory, which keeps the network topology comparatively stable. The DNH algorithm also gives up queueing at a node, selecting another idle node to forward data packets if the first node is running at full throughput. Simulation shows that selecting another node is better than queueing at a fully loaded node when certain conditions are satisfied; in particular, during a sea battle every warship in the ad hoc network wants to save time and increase propagation reliability. The DNH algorithm can help decrease routing time and raise the efficiency of marine ad hoc networks.

7. Mobile Ad Hoc Networks: Current Status and Future Trends. Loo, Jonathan. 2011-01-01. Guiding readers through the basics of these rapidly emerging networks to more advanced concepts and future expectations, Mobile Ad hoc Networks: Current Status and Future Trends identifies and examines the most pressing research issues in Mobile Ad hoc Networks (MANETs). Containing the contributions of leading researchers, industry professionals and academics, this forward-looking reference provides an authoritative perspective on the state of the art in MANETs. The book includes surveys of recent publications that investigate key areas of interest, such as limited resources and the mobility o…

8. Voice Service Support in Mobile Ad Hoc Networks. Jiang, Hai; Poor, H. Vincent; Zhuang, Weihua. 2007-01-01. Mobile ad hoc networks are expected to support voice traffic. The requirement for small delay and jitter of voice traffic poses a significant challenge for medium access control (MAC) in such networks, and user mobility makes it more complex due to the associated dynamic path attenuation. In this paper, a MAC scheme for mobile ad hoc networks supporting voice traffic is proposed. With the aid of a low-power probe prior to DATA transmissions, resource reservation is achieved in a distributed manner, leading to small delay and jitter. The proposed scheme can automatically adapt to dynamic path attenuation in a mobile environment, and simulation results demonstrate its effectiveness.

9. Gossip Based Routing Protocol Design for Ad Hoc Networks. Toqeer Mahmood. 2012-01-01. A spontaneously formed, decentralized network with no formal infrastructure, limited in temporal and spatial extent, in which each node communicates with the others over a wireless channel and is willing to forward data for other nodes, is called a wireless ad hoc network. In this research study, we propose a routing strategy based on the gossip routing approach that follows proactive routing, with some adaptation for wireless ad hoc networks. Analytical verification of our proposed idea shows that it is a better approach based on gossip routing.
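The abstract of entry 9 leaves the gossip mechanics unspecified; the following is a generic probabilistic-flooding sketch of the kind usually at the core of gossip routing, with a hypothetical forwarding probability p.

```python
import random

def gossip_broadcast(graph, source, p=0.7, seed=None):
    """Probabilistic flooding: each node that receives a message for the
    first time rebroadcasts it with probability p (the source always sends).

    graph: dict mapping node -> list of neighbour nodes.
    Returns the set of nodes that received the message.
    """
    rng = random.Random(seed)
    received = {source}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        # The source forwards deterministically; others gossip with prob. p.
        if node != source and rng.random() > p:
            continue
        for neighbour in graph[node]:
            if neighbour not in received:
                received.add(neighbour)
                frontier.append(neighbour)
    return received

# Toy 5-node topology; with p close to 1 this degenerates to plain flooding.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(gossip_broadcast(topology, source=0, p=0.7, seed=42))
```

With p = 1 this reduces to plain flooding; lowering p trades delivery probability for fewer redundant transmissions.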
10. Performance Comparison of Routing Protocols in Mobile Ad Hoc Networks. Er. Rakesh Kumar. 2010-08-01. Ad hoc networks are self-configuring networks with random, quickly changing topologies; they therefore need a robust dynamic routing protocol that can accommodate such an environment. Different protocols govern mobile ad hoc networks, and to improve the packet delivery ratio of the Destination-Sequenced Distance Vector (DSDV) routing protocol under high mobility, a message exchange scheme for invalid-route reconstruction is used. Three protocols, AODV, DSDV and I-DSDV, were simulated using the NS-2 package and compared in terms of packet delivery ratio, end-to-end delay and routing overhead in different environments, varying the number of nodes, speed and pause time. Simulation results show that I-DSDV, compared with DSDV, reduces the number of dropped data packets with a small increase in overhead at higher rates of node mobility, and it still competes with AODV at higher node speeds and node counts.

11. Distribution of Information in Ad Hoc Networks. 2007-09-01. (Figure captions: the MACA protocol; route discovery in AODV; creation of a route.) The DSR (Dynamic Source Routing) protocol, proposed by Johnson and Maltz, is very similar to AODV; Ad hoc On-Demand Distance Vector routing (AODV) and the unicast version of DSR are reactive protocols, in which paths are built on demand.

12. Comparative Analysis of Routing Attacks in Ad Hoc Network. Bipul Syam Purkayastha. 2012-03-01. In mobile ad hoc networks, the major role is played by the routing protocols, which route data from one mobile node to another. But in such networks, routing protocols are vulnerable to various kinds of security attacks, such as blackhole node attacks. The routing protocols of a MANET are unprotected, and the network can therefore contain malicious mobile nodes, which essentially act as attackers. In this paper, we modify the existing DSR protocol with attack-detection functionality without affecting the overall performance of the network. We also consider various attacks on mobile ad hoc networks, namely the blackhole attack and the flooding attack, and show a comparative analysis of these attacks using the network simulator ns-2.

13. Schmidt, Ricardo de O.; Pras, Aiko; Gomes, Reinaldo. 2011-01-01. Ad-hoc networks are supposed to operate autonomously and, therefore, self-* technologies are fundamental to their deployment. Many of these solutions have been proposed during the last few years, covering several layers and functionalities of networking systems. Addressing can be considered as o…

14. Neighbor-Aware Control in Ad Hoc Networks. 2002-12-01. Neighbor-aware contention resolution: the channel access problem in ad hoc networks is a special dynamic leader-election problem in which multiple leaders are… message coordinations. However, the network throughput may drastically degrade when the leader election becomes increasingly competitive due to the…
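Packet delivery ratio and average end-to-end delay, the headline metrics in entry 10 above, are straightforward to compute from send/receive timestamps; this toy sketch uses a hypothetical trace format rather than the NS-2 one.

```python
# Packet delivery ratio and average end-to-end delay from a toy trace.
sent = {1: 0.00, 2: 0.05, 3: 0.10, 4: 0.15}      # packet id -> send time (s)
received = {1: 0.12, 2: 0.19, 4: 0.31}           # packet id -> receive time (s)

pdr = len(received) / len(sent)
avg_delay = sum(received[p] - sent[p] for p in received) / len(received)

print(f"PDR = {pdr:.2f}")                        # 0.75: packet 3 was dropped
print(f"avg end-to-end delay = {avg_delay:.3f} s")
```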
15. Wireless sensor and ad hoc networks under diversified network scenarios. Sarkar, Subir Kumar. 2012-01-01. Due to significant advantages, including convenience, efficiency and cost-effectiveness, the implementation and use of wireless ad hoc and sensor networks have grown steeply in recent years. This timely book presents the current state of the art in these popular technologies, providing expert guidance for projects in the field. It offers broad-ranging coverage of important concepts and methods, definitions of key terminology, and a look at the direction of future research. Supported by nearly 150 illustrations, the book discusses a variety of critical topics, from topology…

16. The 11th Annual Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net 2012). Pitsillides, A.; Douligeris, C.; Vassiliou, V.; Heijenk, Gerhard J.; Cavalcante de Oliveira, J. 2012-01-01. Message from the General Chairs: Welcome to the 2012 Mediterranean Ad Hoc Networking Workshop in Ayia Napa, Cyprus. We are excited to host Med-Hoc-Net, a major annual international workshop, following recent successful workshops in Sicily (2006), Corfu (2007), Palma de Mallorca (2008) and Haifa (2009)…

18. Performance Evaluation of TCP over Mobile Ad hoc Networks. Ahmed, Foez; Islam, Nayeema; Debnath, Sumon Kumar. 2010-01-01. With the proliferation of mobile computing devices, the demand for continuous network connectivity regardless of physical location has spurred interest in the use of mobile ad hoc networks. Since the Transmission Control Protocol (TCP) is the standard network protocol for communication on the Internet, any wireless network with Internet service needs to be compatible with TCP. TCP is tuned to perform well in traditional wired networks, where packet losses occur mostly because of congestion. However, TCP connections in ad-hoc mobile networks are plagued by problems such as high bit error rates, frequent route changes, multipath routing and temporary network partitions. The throughput of TCP over such connections is not satisfactory, because TCP misinterprets packet loss or delay as congestion and invokes its congestion control and avoidance algorithms. In this research, the performance of TCP in ad-hoc mobile networks with high bit error rate (BER) and mobility is studied and investigated. A simulation model is implement…

19. Virtual reality mobility model for wireless ad hoc networks. Yu Ziyue; Gong Bo; He Xingui. 2008-01-01. For wireless ad hoc network simulation, a node's mobility pattern and traffic pattern are two key elements. A new simulation model is presented based on a virtual-reality collision-detection algorithm for obstacle environments; the model uses path planning to avoid obstacles and to compute each node's moving path. Obstacles also affect node signal propagation. Considering these factors, this study implements the mobility model for wireless ad hoc networks. Simulation results show that the model has a significant impact on the performance of protocols.
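Entry 19 above (like the NASO model in entry 19 of the earlier group) adds obstacle handling on top of a base mobility model. For reference, a minimal random-waypoint sketch without any obstacle logic looks like this; the area size, speed range and time step are arbitrary choices.

```python
import random

def random_waypoint(steps, area=(1000.0, 1000.0), vmax=20.0, dt=1.0, seed=1):
    """Minimal random-waypoint mobility trace: pick a random destination and
    speed, walk toward it, repeat. Obstacle-aware models add a path-planning
    step here instead of moving in a straight line.
    """
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
    speed = rng.uniform(1.0, vmax)
    trace = []
    for _ in range(steps):
        dx, dy = dest[0] - x, dest[1] - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < speed * dt:            # reached waypoint: choose a new one
            x, y = dest
            dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
            speed = rng.uniform(1.0, vmax)
        else:                            # straight-line step toward waypoint
            x += speed * dt * dx / dist
            y += speed * dt * dy / dist
        trace.append((x, y))
    return trace

print(random_waypoint(5))
```

An obstacle-aware model would replace the straight-line step with a planned path around the obstacle set.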
20. Flying Ad-Hoc Networks: Routing Protocols, Mobility Models, Issues. Muneer Bani Yassein; "Nour Alhuda" Damer. 2016-01-01. Flying Ad-Hoc Networks (FANETs) are groups of Unmanned Air Vehicles (UAVs) that complete their work without human intervention. There are some problems in this kind of network: the first is communication between UAVs. Various routing protocols, classified into three categories (static, proactive and reactive), have been introduced to solve this problem. The second problem is network design, which depends on network mobility, the process of cooperati…

1. Interference in wireless ad hoc networks with smart antennas. Alabdulmohsin, Ibrahim. 2014-08-01. In this paper, we show that the use of directional antennas in wireless ad hoc networks can actually increase interference, due to limitations of virtual carrier sensing. We derive a simple mathematical expression for interference in both physical and virtual carrier-sense networks, which reveals, counter-intuitively, that receivers in large dense networks with directional antennas can experience larger interference than in omnidirectional networks unless the beamwidth is sufficiently small. The validity of the mathematical analysis is confirmed using simulations.

2. Security Challenges and Attacks in Mobile Ad Hoc Networks. CH.V. Raghavendran. 2013-09-01. A Mobile Ad hoc Network (MANET) is an autonomous collection of mobile nodes that form a temporary network without any existing network infrastructure or central access point. The popularity of these networks has made security an important issue. Traditional routing protocols perform well under dynamically changing topology but are not designed to defend against security challenges. In this paper we discuss the current challenges in an ad hoc environment, including the different types of potential attacks possible in mobile ad hoc networks that can harm their working and operation. We have studied the literature and gathered information relating to various types of attacks. In our study, we found that there is no general algorithm that works well against the most commonly known attacks; a complete security solution requires prevention, detection and reaction mechanisms applied in the MANET. To develop suitable security solutions for such environments, we must first understand how MANETs can be attacked. This paper provides a comprehensive study of attacks against mobile ad hoc networks and presents a detailed classification of them.

3. Study on Sinkhole Attacks in Wireless Ad hoc Networks. GAGANDEEP. 2012-06-01. A wireless ad hoc network is a collection of wireless mobile nodes that dynamically self-organize into arbitrary and temporary network topologies. Compared to conventional networks, wireless ad hoc networks are more vulnerable to security attacks. The nature and structure of a wireless ad hoc network make it very attractive to attackers, because there is no fixed infrastructure or administrative approach in it. The sinkhole attack is one of the severe attacks on this type of network; it turns trusted nodes into malicious ones, resulting in the loss of secure information. This paper focuses on sinkhole attacks on routing protocols such as DSR and AODV. To reduce the impact of such attacks, we discuss Security-Aware Routing (SAR).
4. BCR Routing for Intermittently Connected Mobile Ad hoc Networks. S. RAMESH. 2014-03-01. Wireless and mobile networks support a wide range of applications, and Mobile Ad hoc Networks (MANETs) aid the development of many of them; real-world applications are achieved through effective routing. The Intermittently Connected Mobile Ad hoc Network (ICMANET) is a sparse network in which full connectivity is never possible. An ICMANET is a disconnected MANET and also a Delay Tolerant Network (DTN) that sustains high delays. Routing in such a disseminated network is a difficult task. A new routing scheme called Bee Colony Routing (BCR) is proposed, with the goal of achieving optimal delivery of data packets toward the destined node. BCR is based on the Bee Colony Optimization (BCO) technique, and routing in the ICMANET is done by means of the bee routing protocol. This paper presents a novel routing methodology for data transmission in ICMANETs.

5. Energy management in wireless cellular and ad-hoc networks. Imran, Muhammad; Qaraqe, Khalid; Alouini, Mohamed-Slim; Vasilakos, Athanasios. 2016-01-01. This book investigates energy management approaches for energy-efficient and energy-centric system design and architecture, and it presents end-to-end energy management for the recent heterogeneous wireless network medium. It also considers energy management in wireless sensor and mesh networks by exploiting energy-efficient transmission techniques and protocols, and it explores energy management in emerging applications, services and engineering to be facilitated by 5G networks, such as WBANs, VANETs and cognitive networks. A special focus of the book is the examination of energy management practices in emerging wireless cellular and ad hoc networks. Considering the broad scope of energy management in these networks, the book is organized into sections covering energy-efficient systems and architectures; energy-efficient transmission and techniques; and energy-efficient applications and services.

6. ON THE CAPACITY REGION OF WIRELESS AD HOC RELAY NETWORKS. Dai Qinyun; Yao Wangsheng; Peng Jianmin; Su Gang. 2006-01-01. Network capacity is a key characteristic for evaluating the performance of wireless networks. The goal of this paper is to study the capacity of wireless ad hoc relay networks. In the model, at most n_s source nodes transmit simultaneously, and arbitrarily complex network coding is allowed. The upper capacity bound of the network model is derived from the max-flow min-cut theorem, and the lower capacity bound is obtained from the rate-distortion function for the Gaussian source. Simulation results show that the upper network capacity decreases as the number of source nodes increases.

7. Providing Location Security in Vehicular Ad Hoc Networks. Yan, Gongjun. 2010-01-01. Location is fundamental information in Vehicular Ad-hoc Networks (VANETs); almost all VANET applications rely on it. It is therefore important to ensure the integrity of location information, meaning that it is original (from the generator), correct (not bogus or fabricated) and unmodified (value not changed). We…

8. Yang, Y.; Heijenk, Gerhard J.; Haverkort, Boudewijn R.H.M. 2009-01-01. This paper presents a simple resource control mechanism with traffic scheduling for 2-hop ad-hoc networks, in which the Request-To-Send (RTS) packet is utilized to deliver feedback information. With this feedback information, the Transmission Opportunity (TXOP) limit of the sources can be controlled…

9. Challenges of evidence acquisition in wireless ad-hoc networks. Mutanga, MB. 2010-05-01. …a big challenge.
Thus, the aim of this paper is to explore the challenges of acquiring live evidence in wireless ad-hoc networks. We also give some legal requirements for evidence admissibility, as outlined in the Communications and Transactions Act…

10. Internet Connectivity using Vehicular Ad-Hoc Networks. Hashim Ali; Aamir Saeed; Syed Rohullah Jan; Asadullah; Ahsan Khawaja. 2012-01-01. Although a mobile ad-hoc network (MANET) can be used in many cases, the most desirable is a MANET connected to the Internet. This is achieved by using gateways that act as bridges between the MANET and the Internet. To communicate, a mobile node needs to find a valid route to the gateway, which requires a gateway discovery mechanism. In this paper, Ad hoc On-Demand Distance Vector (AODV) is altered to achieve interconnection between a MANET and the Internet. Furthermore, the pap…

11. Dynamic Encryption Technology in Ad-hoc Networks. JIN Zhao-hui; WANG Shun-man; XU Kai; LIANG Qing. 2007-01-01. A new dynamic encryption application for ad-hoc networks is proposed. The advantages of this method are that it can use the previous ciphertext as the seed of a new encryption process, making the encryption effective throughout the communication process through continuous dynamic key generation together with synchronization, and that it can cut back on system bandwidth to a greater extent, which is valuable in the ad-hoc setting. In addition, the rationality and effectiveness of this novel encryption method have been verified by the test results.

12. A Secure Multi-Routing Platform for Ad Hoc Network. LU She-jie; CHEN Jing; XIONG Zai-hong. 2008-01-01. In an ad hoc network, it is usually difficult to optimize the assignment of network routing resources using a single type of routing protocol, due to differences in network scale, node movement mode and node distribution. It is therefore desirable to have nodes run multiple routing protocols simultaneously, so that more than one protocol can be chosen to work jointly. For this purpose, a multiple-routing platform for ad hoc networks is proposed on top of current routing protocols. To ensure the security of the platform, a security mechanism and its formal analysis in BAN logic are given. Simulation results on network performance demonstrate that the proposed multi-routing platform is practicable in some complex applications.

13. Distributed Reinforcement Learning Approach for Vehicular Ad Hoc Networks. Wu, Celimuge; Kumekawa, Kazuya; Kato, Toshihiko. In Vehicular Ad hoc Networks (VANETs), general-purpose ad hoc routing protocols such as AODV cannot work efficiently, due to the frequent changes in network topology caused by vehicle movement. This paper proposes a VANET routing protocol, QLAODV (Q-Learning AODV), which suits unicast applications in high-mobility scenarios. QLAODV is a distributed reinforcement learning routing protocol that uses a Q-learning algorithm to infer network state information and uses unicast control packets to check path availability in real time, allowing Q-learning to work efficiently in a highly dynamic network environment. QLAODV is favored for its dynamic route change mechanism, which makes it capable of reacting quickly to network topology changes. We present an analysis of the performance of QLAODV by simulation using different mobility models; the results show that QLAODV can efficiently handle unicast applications in VANETs.
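Entry 13's QLAODV uses Q-learning to score next hops; a toy version of that update loop, with a made-up reward signal and parameters not taken from the paper, might look like this.

```python
import random

# Toy Q-learning next-hop selection in the spirit of QLAODV (entry 13):
# Q[node][neighbour] estimates how good it is to forward via that neighbour.
ALPHA, GAMMA, EPSILON = 0.5, 0.8, 0.1   # hypothetical learning parameters

def update_q(q, node, neighbour, reward, q_next_max):
    """Standard Q-learning update applied to a (node, next-hop) pair."""
    old = q[node][neighbour]
    q[node][neighbour] = old + ALPHA * (reward + GAMMA * q_next_max - old)

def choose_next_hop(q, node, rng):
    """Epsilon-greedy selection keeps probing alternative paths, which is
    what lets the route adapt when the topology changes."""
    if rng.random() < EPSILON:
        return rng.choice(list(q[node]))
    return max(q[node], key=q[node].get)

rng = random.Random(0)
q = {"A": {"B": 0.0, "C": 0.0}}           # node A can forward via B or C
for _ in range(20):
    hop = choose_next_hop(q, "A", rng)
    reward = 1.0 if hop == "B" else -1.0  # pretend B reaches the destination
    update_q(q, "A", hop, reward, q_next_max=0.0)
print(q)
```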
14. Simulator for Energy Efficient Clustering in Mobile Ad Hoc Networks. Amit Kumar. 2012-05-01. Research on various issues in mobile ad hoc networks is popular because of the networks' challenging nature and always-on connectivity. Network simulators provide a platform to analyse and imitate the working of the nodes in a network, along with its traffic and other entities. The current work proposes the design of a simulator for mobile ad hoc networks that provides a test bed for energy-efficient clustering in a dynamic network. Node parameters such as degree of connectivity and average transmission power are considered when calculating the energy consumption of the mobile devices. Nodes that consume the minimum energy among their 1-hop neighbours are selected as the cluster heads.

15. 彭伟; 卢锡城. 2001-01-01. Broadcast is an important operation in many network protocols. It is used to discover routes to unknown nodes in mobile ad hoc networks (MANETs) and is the key factor in scaling on-demand routing protocols to large networks. This paper presents the Ad Hoc Broadcast Protocol (AHBP) and discusses its performance. In the protocol, messages are only rebroadcast by broadcast relay gateways, which constitute a connected dominating set of the network. AHBP can efficiently reduce the redundant messages that make flooding-like protocols perform badly in large dense networks. Simulations were conducted to determine the performance characteristics of the protocol; the results show excellent reduction of broadcast redundancy with AHBP, which also contributes to reduced broadcast collision and congestion.

16. Realtime multiprocessor for mobile ad hoc networks. T. Jungeblut. 2008-05-01. This paper introduces a real-time multiprocessor system-on-chip (MPSoC) for low-power wireless applications. The multiprocessor is based on eight 32-bit RISC processors connected via a network-on-chip (NoC). The NoC follows a novel approach that guarantees bandwidth to the application, meeting hard real-time requirements. At a clock frequency of 100 MHz, the total power consumption of the MPSoC, fabricated in 180 nm UMC standard-cell technology, is 772 mW.

17. 2010-08-01. Reliable global networking is essential for rapidly growing mobile and interactive communication, and satellite communication already plays a significant role in this subject. However, classical space-based data transmission requires appropriate infrastructure, both on the ground and in orbit. This paper discusses the potential of a self-organising distributed satellite system in Low Earth Orbit (LEO) to achieve seamless integration into existing infrastructures. The communication approach is based on dynamic Inter-Satellite Links (ISL), neither controlled nor coordinated on an individual basis from the ground-based stations.
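The cluster-head rule in entry 14 above (a node becomes cluster head if it consumes the minimum energy among its 1-hop neighbours) is easy to state in code; the topology and energy-cost values here are placeholders.

```python
# Minimal sketch of the cluster-head selection rule described in entry 14.
def elect_cluster_heads(neighbours, energy_cost):
    """neighbours: node -> set of 1-hop neighbour nodes.
    energy_cost: node -> estimated energy consumption (e.g. derived from
    degree of connectivity and average transmission power).
    Returns the set of cluster heads; ties are broken by node id.
    """
    heads = set()
    for node, nbrs in neighbours.items():
        group = nbrs | {node}
        best = min(group, key=lambda n: (energy_cost[n], n))
        if best == node:
            heads.add(node)
    return heads

topology = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cost = {1: 0.9, 2: 0.4, 3: 0.7, 4: 0.5}
print(elect_cluster_heads(topology, cost))   # {2, 4}: local energy minima
```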
18. Authentication Based on Multilayer Clustering in Ad Hoc Networks. Suh Heyi-Sook. 2005-01-01. In this paper, we describe a secure cluster-routing protocol based on a multilayer scheme in ad hoc networks; this work provides a scalable, threshold authentication scheme. We present detailed security threats against ad hoc routing protocols, specifically examining cluster-based routing. Our proposed protocol, called Authentication based on Multilayer Clustering for Ad hoc Networks (AMCAN), designs an end-to-end authentication protocol that relies on mutual trust between nodes in other clusters. The AMCAN strategy takes advantage of a multilayer architecture designed for authentication at a cluster head (CH), using the new concept of a control cluster head (CCH). We propose an authentication protocol that uses certificates containing an asymmetric key together with the multilayer architecture, so that the CCH is realized using a threshold scheme, thereby reducing the computational overhead and defeating all identified attacks. We also use the wider CCH scope with an identification protocol to build a highly secure, highly available authentication service, which forms the core of our security framework.

19. Auto-Configuration Protocols in Mobile Ad Hoc Networks. Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez. 2011-01-01. The TCP/IP protocol allows the different nodes in a network to communicate by associating a different IP address with each node. In wired or wireless networks with infrastructure, a server or a node acting as such correctly assigns IP addresses, but in mobile ad hoc networks there is no such centralized entity capable of carrying out this function. Therefore, a protocol is needed to perform the network configuration automatically and dynamically, using all nodes in the network (or part of them) as if they were servers that manage IP addresses. This article reviews the major proposed auto-configuration protocols for mobile ad hoc networks, with particular emphasis on one of the most recent, D2HCP. The work also compares auto-configuration protocols against the most relevant metrics, such as guarantee of uniqueness, overhead, latency, dependency on the routing protocol, and uniformity. PMID:22163814

1. Analyzing Reactive Routing Protocols in Mobile Ad Hoc Networks. Dr. Kamaljit I. Lakhtaria. 2012-05-01.

2. Implementing Smart Antenna System in Mobile Ad Hoc Networks. Supriya Kulkarni P. 2014-06-01. As the necessity of exchanging and sharing data increases, users demand easy connectivity and fast networks, whether they are at work, at home, or on the move. Nowadays, users are interested in interconnecting all their personal electronic devices (PEDs) in an ad hoc fashion on the move. This type of network is referred to as a Mobile Ad hoc NETwork (MANET).
When a smart antenna system (SAS) is implemented in such a network, maximum capacity can be achieved and the quality and coverage improved; we therefore set out to implement an SAS in a MANET. In this paper we show the significance of throughput and bit error rate when implementing an SAS in a MANET using MATLAB R2010a.

3. A Simplified Mobile Ad Hoc Network Structure for Helicopter Communication. 2016-01-01. A number of volunteer and statutory organizations are capable of conducting an emergency response using helicopters. Rescue operations require a rapidly deployable, high-bandwidth network to coordinate the necessary relief efforts between rescue teams on the ground and helicopters. Due to massive destruction and loss of services, ordinary communication infrastructures may collapse in these situations; consequently, information exchange becomes one of the major challenges. Helicopters can also be employed to provide many services in rugged environments, military applications and aerial photography. An ad hoc network can provide an alternative communication link between a set of helicopters, particularly when a significant amount of data must be shared. This paper addresses the ability of ad hoc networks to support communication between a set of helicopters. A simplified network structure model is presented and extensively discussed, a streamlined routing algorithm is proposed, and comprehensive simulations are conducted to evaluate the proposed routing algorithm.

4. Efficient Resource Management for Multicast Ad Hoc Networks: Survey. Amit Chopra. 2016-09-01. Group communication over a multicast ad hoc network suffers from insufficient utilization of limited resources: the shared channel, battery, data processing capabilities, storage space, etc. A multicast routing protocol should be able to manage all these resources, because their consumption depends on different factors: unicast/multicast network operations, dynamic topology due to mobility, control overhead due to scalability, and packet loss and retransmission due to collision and congestion. All these factors may cause unnecessary network load, delay and unfair resource utilization. Multicast ad hoc routing protocols are more efficient than unicast routing protocols, but they also suffer from the performance-degradation factors discussed above. Researchers have developed various layer-wise solutions for resource optimization; in this paper, we explore the different schemes for fair utilization of network resources.

5. Power Control in Multi-cluster Mobile Ad hoc Networks. JIN Yanliang; YANG Yuhang. 2003-01-01. Power control brings many advantages, including power saving, lower interference and efficient channel utilization. We propose two clustering algorithms with power control for multi-cluster mobile ad hoc networks. They improve network throughput and stability compared with ad hoc networks in which all mobile nodes use the same transmission power, and they help reduce system power consumption. We compare the performance of the two approaches: simulation results show that the Distributed Clustering Algorithm with Power control (DCAP) achieves better throughput and lower power consumption than the Centralized Clustering Algorithm with Power control (CCAP), but it is more complicated and more easily affected by node velocity.
6. Capacity of Wireless Ad Hoc Networks with Opportunistic Collaborative Communications. Simeone O. 2007-01-01. Optimal multihop routing in ad hoc networks requires the exchange of control messages at the MAC and network layers in order to set up the (centralized) optimization problem. Distributed opportunistic space-time collaboration (OST) is a valid alternative that avoids this drawback by enabling opportunistic cooperation with the source at the physical layer. In this paper, the performance of OST is investigated. It is shown analytically that opportunistic collaboration outperforms (centralized) optimal multihop routing when spatial reuse (i.e., the simultaneous transmission of more than one data stream) is not allowed by the transmission protocol. Conversely, when spatial reuse is possible, the relative performance of the two protocols has to be studied case by case in terms of the corresponding capacity regions, given the topology and the physical parameters of the network at hand. Simulation results confirm that opportunistic collaborative communication is a promising paradigm for wireless ad hoc networks that deserves further investigation.

7. Multicost Routing Approach in Wireless Ad hoc Networks. P. Ramamoorthy. 2012-01-01.

8. A new traffic allocation algorithm in Ad hoc networks. LI Xin; MIAO Jian-song; SUN Dan-dan; ZHOU Li-gang; DING Wei. 2006-01-01. A dynamic traffic distribution algorithm based on minimizing the product of packet delay and packet energy consumption is proposed. The algorithm allocates traffic according to packet delay and energy consumption, which can optimize network performance. Simulation demonstrated that the algorithm dynamically adjusts the traffic distribution between paths and can minimize the product of packet delay and energy consumption in mobile ad hoc networks.

9. Journey from Mobile Ad Hoc Networks to Wireless Mesh Networks. Wang, Junfang; Xie, Bin; Agrawal, Dharma P. A wireless mesh network (WMN) is a particular type of mobile ad hoc network (MANET) that aims to provide ubiquitous high-bandwidth access for a large number of users. A pure MANET is dynamically formed by mobile devices without the requirement of any existing infrastructure or prior network configuration. Like a MANET, a WMN has the ability of self-organization, self-discovery, self-healing and self-configuration. However, a WMN is typically a collection of stationary mesh routers (MRs), each employing multiple radios, with some MRs wired as Internet gateways (IGWs) to provide Internet connectivity for the other MRs. These new features of WMNs over MANETs make them a promising alternative for high-bandwidth Internet access. In this chapter, we elaborate on the evolution from MANETs to WMNs and provide a comprehensive understanding of WMNs, from theoretical aspects to practical protocols, comparing them with MANETs. In particular, we focus on the following critical issues in WMN deployment: network capacity, positioning techniques, fairness in transmission, and multiradio routing protocols. We end the chapter with some open problems and future directions in WMNs.
10. Bandwidth Estimation for Mobile Ad hoc Networks (MANET). Rabia Ali. 2011-09-01. In this paper we present a bandwidth estimation scheme for MANETs that uses components of two methods: 'Hello' bandwidth estimation and 'Listen' bandwidth estimation. The paper also gives the advantages of the proposed method, which is based on a comparison of these two methods. Bandwidth estimation is an important issue in MANETs, and a difficult one, because each host has imprecise knowledge of the network status and links change dynamically; an effective bandwidth estimation scheme for MANETs is therefore highly desirable. Ad hoc networks present unique, advanced challenges, including the design of protocols for mobility management, effective routing, data transport, security, power management and quality-of-service (QoS) provisioning. Once these problems are solved, the practical use of MANETs will be realizable.

11. A Novel Routing Scheme for Mobile Ad Hoc Network. Prem Chand. 2013-04-01. A Mobile Ad hoc Network (MANET) is a collection of mobile users without the support of a fixed infrastructure. The nodes in these networks have several constraints, such as transmission power, bandwidth and processing capability. In addition, an important parameter of interest is the residual battery power of the nodes, which conventional routing schemes do not take into consideration. This paper therefore proposes a routing strategy that accounts for residual battery power by modifying the Route Request (RREQ) packet of the Ad hoc On-demand Distance Vector (AODV) routing protocol. The protocol chooses a threshold below which a node is not allowed to relay data or control packets. The results show a remarkable improvement in packet delivery ratio (PDR) and throughput, while the network lifetime is not adversely affected.

12. Beamforming in Ad Hoc Networks: MAC Design and Performance Modeling. Khalil Fakih. 2009-01-01. We examine in this paper the benefits of beamforming techniques in ad hoc networks. We first devise a novel MAC paradigm for ad hoc networks using these techniques in multipath fading environments. In such networks, the use of conventional directional antennas does not necessarily improve system performance; on the other hand, exploiting the potential benefits of smart antenna systems, and especially beamforming techniques, requires prior knowledge of the physical channel. Our proposition performs channel estimation and radio resource sharing jointly. We validate the usefulness of the proposed MAC and evaluate the effects of channel estimation on network performance. We then present an accurate analytical model of the performance of the IEEE 802.11 MAC protocol, and we extend this model, by introducing the fading probability, to derive the saturation throughput for our proposed MAC when the simplest beamforming strategy is used in real multipath fading ad hoc networks. Finally, numerical results validate our proposition.
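Entry 11 above modifies the AODV RREQ handling so that nodes below a residual-battery threshold refuse to relay; a minimal sketch of that relay decision, with hypothetical names and threshold value, is:

```python
# Minimal sketch of the relay rule in entry 11: a node forwards an AODV
# route request only if its residual battery power is above a threshold.
BATTERY_THRESHOLD = 0.2   # fraction of full charge (hypothetical value)

def should_relay_rreq(residual_battery: float, seen_before: bool) -> bool:
    """Drop duplicate RREQs as in standard AODV, and additionally refuse to
    relay when the node's battery is below the threshold, keeping nearly
    exhausted nodes off newly discovered routes."""
    if seen_before:
        return False
    return residual_battery >= BATTERY_THRESHOLD

print(should_relay_rreq(0.65, seen_before=False))  # True: healthy node
print(should_relay_rreq(0.10, seen_before=False))  # False: conserve energy
```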
13. Cost management based security framework in mobile ad hoc networks. 2006-01-01. Security issues are always difficult to deal with in mobile ad hoc networks. The costs of security schemes have seldom been studied individually, and for security methods designed and adopted beforehand, their effects are often investigated one by one. In fact, when facing certain attacks, different methods respond individually, which results in a waste of resources. Using the idea of cost management, we analyze the costs of security measures in mobile ad hoc networks and introduce a security framework based on cost management of security mechanisms. Under the framework, the network system's own tasks can be finished in time and the whole network's security costs can be decreased. We discuss the process of computing security costs at each mobile node and in certain node groups. To show how to use the proposed security framework in applications, we give examples of DoS attacks and of computing the costs of defense methods. The results show that a more secure environment can be achieved based on this security framework in mobile ad hoc networks.

14. Recent development in wireless sensor and ad-hoc networks. Li, Xiaolong; Yang, Yeon-Mo. 2015-01-01. A Wireless Sensor Network (WSN) consists of numerous physically distributed autonomous devices used for sensing and monitoring physical and/or environmental conditions. A WSN uses a gateway that provides wireless connectivity to the wired world as well as to distributed networks. There are many open problems related to ad-hoc networks and their applications. Looking at the expansion of the cellular infrastructure, ad-hoc networks may act as the basis of 4th-generation wireless technology with the new paradigm of 'anytime, anywhere communications'. To realize this, the real challenges are the security, authorization and management issues of large-scale WSNs. This book is an edited volume in the broad area of WSNs; it covers multi-channel wireless sensor networks, their coverage, connectivity and deployment, and comparisons of various communication protocols and algorithms, such as MANNET, ODMRP and ADMR protocols for ad hoc multicasting, location-based c…

15. Wireless ad hoc and sensor networks: management, performance, and applications. He, Jing. 2013-01-01. Although wireless sensor networks (WSNs) have been employed across a wide range of applications, there are very few books that emphasize the algorithm description, performance analysis and applications of network management techniques in WSNs. Filling this need, Wireless Ad Hoc and Sensor Networks: Management, Performance, and Applications summarizes not only traditional and classical network management techniques but also state-of-the-art techniques in this area. The articles presented are expository but scholarly in nature, including the appropriate historical background and a review of current…

16. Performance Comparisons of Routing Protocols in Mobile Ad Hoc Networks. Manickam, P.; Girija, M.; Manimegalai, D. 10.5121/ijwmn.2011.3109. 2011-01-01. A Mobile Ad hoc Network (MANET) is a collection of wireless mobile nodes that dynamically form a temporary network without any support from a central administration. Moreover, every node in a MANET moves arbitrarily, making the multi-hop network topology change randomly at unpredictable times. Several well-known routing protocols, such as DSDV, AODV and DSR, have been proposed for providing communication among all the nodes in the network. This paper presents a performance comparison of the proactive and reactive protocols DSDV, AODV and DSR, based on metrics such as throughput, packet delivery ratio and average end-to-end delay, using the NS-2 simulator.

17. Network Size and Connectivity in Mobile and Stationary Ad Hoc Networks. Lance Joneckis; Corinne Kramer; David Sparrow; David Tate. 2014-05-01. One of the assumptions behind tactical mobile ad hoc networks (MANETs) is that routes consisting of multiple hops will be available to connect those nodes that lack line-of-sight connectivity.
18. Safety Message Power Transmission Control for Vehicular Ad hoc Networks. Samara, Ghassan; Al-Salihy, Wafaa A. H. 2010-01-01. Vehicular ad hoc networks (VANETs) are one of the most challenging research areas in the field of mobile ad hoc networks. In this research we propose a dynamic power adjustment protocol for sending the periodic safety message (beacon), based on analysis of the channel status, which depends on the channel congestion and the power used for transmission. The Beacon Power Control (BPC) protocol first senses and examines the percentage of channel congestion; the result is then used to adjust the transmission power of the safety message toward the optimal power. This decreases congestion in the channel and achieves good channel performance and beacon dissemination.

19. Proposal of interference reduction routing for ad-hoc networks. Katsuhiro Naito. 2010-10-01. In this paper, we propose an interference-reduction routing protocol for ad-hoc networks. Interference is one of the degradation factors in wireless communications. In an ad-hoc network, some nodes communicate simultaneously; these communications interfere with each other, and some packets are corrupted by interference from other nodes. In the proposed protocol, each node estimates the required transmission power from hello messages, so a node can transmit a data packet with the minimum required transmission power. Consequently, interference with neighbouring nodes can be reduced. Simulation results show that the proposed protocol reduces the number of control messages and improves throughput.

1. Secure Ad Hoc Networking on an Android Platform. 2014-05-01. This report describes a prototype implementation of a secure ad hoc networking system for Commercial Off The Shelf (COTS) Android platforms, with a focus on… (Front matter: future work, references, and appendices on certificate authority key generation and CSR-to-CA signed certificates.)
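Entry 18's BPC adjusts the beacon transmit power from sensed channel congestion; a toy controller in that spirit (the thresholds, step size and power limits are invented for illustration) is:

```python
# Toy congestion-driven beacon power controller in the spirit of entry 18:
# sense the fraction of busy channel time, then nudge transmit power down
# when the channel is congested and up when it is idle.
P_MIN, P_MAX, STEP = 5.0, 33.0, 1.0         # dBm (hypothetical limits)
CONGESTION_HIGH, CONGESTION_LOW = 0.6, 0.3  # busy-time fractions

def adjust_beacon_power(current_dbm: float, channel_busy_ratio: float) -> float:
    """Return the transmit power to use for the next beacon interval."""
    if channel_busy_ratio > CONGESTION_HIGH:      # congested: shrink range
        return max(P_MIN, current_dbm - STEP)
    if channel_busy_ratio < CONGESTION_LOW:       # idle: extend range
        return min(P_MAX, current_dbm + STEP)
    return current_dbm                            # acceptable load: hold

power = 20.0
for busy in (0.7, 0.72, 0.5, 0.2):
    power = adjust_beacon_power(power, busy)
    print(f"busy={busy:.2f} -> power={power:.1f} dBm")
```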
2. Enhanced Reputation Mechanism for Mobile Ad Hoc Networks. Liu, Jinshan; Issarny, Valérie. 2004-01-01. Interactions between entities unknown to each other are inevitable in the ambient-intelligence vision of service access anytime, anywhere. Trust management through a reputation mechanism to facilitate such interactions is recognized as a vital part of mobile ad hoc networks, which feature a lack of infrastructure, autonomy, mobility and the resource scarcity of the composing lightweight terminals. However, the design of a reputation mechanism faces the challenges of how to e…

3. Mobile Ad-Hoc Networking on Android Devices. 2014-03-01. …done to take advantage of High Performance Computing (HPC) resources seeded within the mobile ad-hoc network; having access to HPC resources allows the… Wireless drivers are provided by the device manufacturers; because of this, we could enable IBSS with the same modified kernel. Unfortunately, using the same chipset and… an open-source project called MANET Manager (9), which ports the OLSR daemon to Android devices; with this additional software we were able to successfully…

4. Towards More Realistic Mobility Model in Vehicular Ad Hoc Network. 2012-01-01. Mobility models, the movement patterns of nodes communicating wirelessly, play a vital role in the simulation-based evaluation of vehicular ad hoc networks (VANETs). Even though recent research has developed models that correspond better to real-world mobility, we still have a limited understanding of the level of mobility detail required for modeling and simulating VANETs. In this paper, we propose a new mobility model for VANETs that works on a city area and maps the topo…

5. An Algorithm for Localization in Vehicular Ad-Hoc Networks. Hajar Barani. 2010-01-01. Positioning a node in vehicular ad-hoc networks has been one of the most active research areas in recent years. In many ad-hoc networks, such as vehicular ad-hoc networks, the nodes (vehicles) move very fast on streets and highways; for a safe and fast transport system, every vehicle should know where a traffic problem, such as a broken-down vehicle, occurs. GPS is one of the technologies widely used for positioning services. Problem statement: A vehicle can use a GPS receiver to determine its position, but not all vehicles are equipped with GPS, and GPS signals cannot be received in some places, such as inside a tunnel; in these situations, a vehicle needs a GPS-free method to find its location. Approach: In this study, a new method based on transmission range is suggested. Results: The algorithm was compared with a similar algorithm, ODAM, under the same conditions; the best performance for Optimized Disseminating Alarm Message (ODAM) is obtained when 40% of nodes are equipped with GPS. Conclusion: We executed our algorithm in this situation and compared it with ODAM's results; our algorithm gives better results than ODAM.

6. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm. Omar Elizarraras. 2014-01-01. The problem of obtaining the transmission rate in an ad hoc network consists in adjusting the power of each node so that the signal-to-interference ratio (SIR) and the energy required to transmit from one node to another are ensured at the same time.
Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access / code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission-rate selection assuming perfect power control; our proposal achieves a 10% improvement compared with the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, the paper proposes a genetic algorithm that solves the problem of power combining, interference, data rate and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization; we show by simulation the effectiveness of the proposed protocol in terms of throughput.

7. High Secure Fingerprint Authentication in Ad hoc Network. P. Velayutham. 2010-03-01. In this paper, we propose a novel, robust approach to secure fingerprint authentication and matching techniques for ad-hoc wireless networks. This is a difficult problem in an ad-hoc network, as it involves bootstrapping trust between the devices. This paper presents a solution that provides fingerprint authentication for communication in an ad-hoc network. In this approach, devices exchange a corresponding fingerprint with a master device for mutual communication, which then allows them to complete an authenticated key-exchange protocol over the wireless link. The solution is based on authenticating the user's fingerprint through the master device, which handshakes with the corresponding slave device to authenticate the fingerprint against attacks on the wireless link and on the user's device when it talks to a previously unknown device in its physical proximity. The system is implemented in C#, and the user node for a variety of different devices with Matlab.

8. Mobile ad hoc networking: the cutting edge directions. Basagni, Stefano; Giordano, Silvia; Stojmenovic, Ivan. 2013-01-01. "An excellent book for those who are interested in learning the current status of research and development . . . [and] who want to get a comprehensive overview of the current state-of-the-art." (E-Streams) This book provides up-to-date information on research and development in the rapidly growing area of networks based on the multihop ad hoc networking paradigm. It reviews all classes of networks that have successfully adopted this paradigm, pointing out how they penetrated the mass market and sparked breakthrough research. It covers both physical issues and applica…
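Entry 6 above selects per-node transmission rates with a genetic algorithm; the skeleton below shows a generic GA loop (selection, one-point crossover, mutation) on that encoding, with a toy fitness function standing in for the paper's SIR-constrained objective.

```python
import random

# Generic GA skeleton for per-node rate selection as in entry 6. The rate
# set, toy fitness and GA parameters are hypothetical, not the paper's model.
RATES = [1.0, 2.0, 5.5, 11.0]     # candidate rates (Mbps)
N_NODES, POP, GENS = 6, 20, 40

def fitness(chromosome):
    """Toy objective: total rate, penalized once aggregate load (a crude
    interference proxy) exceeds a budget, mimicking the SIR constraint."""
    total = sum(chromosome)
    return total if total <= 30.0 else 30.0 - (total - 30.0)

rng = random.Random(7)
population = [[rng.choice(RATES) for _ in range(N_NODES)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]              # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, N_NODES)           # one-point crossover
        child = a[:cut] + b[cut:]
        if rng.random() < 0.1:                    # mutation
            child[rng.randrange(N_NODES)] = rng.choice(RATES)
        children.append(child)
    population = parents + children
best = max(population, key=fitness)
print(best, fitness(best))
```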
9. Power-Aware Intrusion Detection in Mobile Ad Hoc Networks. Şen, Sevil; Clark, John A.; Tapiador, Juan E. Mobile ad hoc networks (MANETs) are a highly promising new form of networking, but they are more vulnerable to attacks than wired networks. In addition, conventional intrusion detection systems (IDS) are ineffective and inefficient in highly dynamic and resource-constrained environments. Achieving an effective operational MANET requires tradeoffs between functional and non-functional criteria. In this paper we show how Genetic Programming (GP), together with a Multi-Objective Evolutionary Algorithm (MOEA), can be used to synthesise intrusion detection programs that make optimal tradeoffs between security criteria and the power they consume.

10. EXTENDED DYNAMIC SOURCE ROUTING PROTOCOL FOR AD HOC NETWORK. Puja Kumari Sharma. 2012-07-01. A MANET is a collection of self-configurable mobile nodes. Several routing protocols have been proposed for ad hoc networks, among which the on-demand protocols DSR and AODV are the most used. The existing Dynamic Source Routing protocol is not suitable for large networks, because the packet size grows with the number of nodes travelled by the route discovery packet. In this paper, an extended DSR routing protocol is proposed to eliminate this limitation; the proposed protocol is suitable for both small and large networks.

11. Connectivity analysis of one-dimensional ad-hoc networks. Hansen, Martin Bøgsted; Rasmussen, Jakob Gulddahl; Schwefel, Hans-Peter. Applications and communication protocols in dynamic ad-hoc networks are exposed to physical limitations imposed by the connectivity relations that result from mobility. Motivated by vehicular freeway scenarios, this paper analyzes a number of important connectivity metrics for instantaneous… hop count; (3) the connectivity distance, expressing the geographic distance that a message can be propagated in the network on multi-hop paths; (4) the connectivity hops, which corresponds to the number of hops necessary to reach all nodes in the connected network. The paper develops…

12. Scalable Overlay Multicasting in Mobile Ad Hoc Networks (SOM). Pariza Kamboj. 2010-10-01. Many crucial applications of MANETs, such as battlefield, conference and disaster recovery scenarios, define the need for group communication, in either one-to-many or many-to-many form. Multicast plays an important role in bandwidth-scarce multihop mobile ad hoc networks composed of mobile nodes with limited battery power. Multicast protocols in MANETs generate considerable control overhead for the maintenance of multicast routing structures, due to frequent changes of the network topology. Large multicast tables for maintaining network structures result in inefficient consumption of wireless link bandwidth and of the battery power of the weaker mobile nodes, which in turn poses scalability problems as the network size is scaled up; yet many MANET applications demand scalability. Multicasting for MANETs therefore needs to reduce state maintenance. As a remedy to these shortcomings, this paper proposes an overlay multicast protocol at the application layer. In the proposed protocol, titled Scalable Overlay Multicasting in Mobile Ad Hoc Networks (SOM), the network nodes construct an overlay hierarchical framework to reduce protocol state and constrain its distribution within a limited scope. Based on a zone around each node, it constructs a virtual structure at the application layer mapped onto the physical topology at the network layer, forming two levels of hierarchy. The two-level hierarchy reduces protocol state maintenance and hence supports vertical scalability. The protocol depends on location information obtained through a distributed location service, which effectively reduces the overhead of route searching and of updating the source-based multicast tree.
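Two of the connectivity metrics listed in entry 11 above, the connectivity distance and the connectivity hops, can be computed for a one-dimensional (freeway-like) snapshot with a breadth-first search; the positions and radio range below are made up.

```python
from collections import deque

def connectivity_metrics(positions, radio_range):
    """Return (connectivity_distance, connectivity_hops) from the first node:
    the farthest geographic distance reachable on multi-hop paths, and the
    hop count needed to reach every node in its connected component.
    Nodes are points on a line; two nodes are linked when within radio range.
    """
    n = len(positions)
    adj = [
        [j for j in range(n)
         if j != i and abs(positions[i] - positions[j]) <= radio_range]
        for i in range(n)
    ]
    hops = {0: 0}
    queue = deque([0])
    while queue:                      # breadth-first search from node 0
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    conn_distance = max(abs(positions[0] - positions[v]) for v in hops)
    conn_hops = max(hops.values())
    return conn_distance, conn_hops

print(connectivity_metrics([0.0, 180.0, 320.0, 900.0], radio_range=200.0))
# (320.0, 2): the node at 900 m is disconnected and does not count
```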
13. Vehicular Ad Hoc and Sensor Networks: Principles and Challenges. Piran, Mohammad Jalil; Babu, G. Praveen. 2011-01-01. The rapid increase of vehicular traffic and congestion on highways has begun hampering the safe and efficient movement of traffic. Consequently, year by year, we see an ascending rate of car accidents and casualties in most countries. Exploiting new technologies, such as wireless sensor networks, is therefore required to reduce these saddening and reprehensible statistics. This has motivated us to propose a novel, comprehensive system that utilizes wireless sensor networks for vehicular networks. We coin the vehicular network employing wireless sensor networks a Vehicular Ad Hoc and Sensor Network, or VASNET for short. The proposed VASNET is intended particularly for highway traffic. VASNET is a self-organizing ad hoc and sensor network comprised of a large number of sensor nodes. In a VASNET there are two kinds of sensor nodes: some are embedded on the vehicles (vehicular nodes), and others are deployed at predetermined distances beside the highway road, known as Road Side Sensor nodes (RSS…

14. Minimum Process Coordinated Checkpointing Scheme for Ad Hoc Networks. Tuli, Ruchi. 2011-01-01. The wireless mobile ad hoc network (MANET) architecture consists of a set of mobile hosts capable of communicating with each other without the assistance of base stations. This has made it possible to create a mobile distributed computing environment, and it has also brought several new challenges in distributed protocol design. In this paper, we study a very fundamental problem, the fault tolerance problem, in a MANET environment and propose a minimum-process coordinated checkpointing scheme. Since potential problems of this new environment are insufficient power and limited storage capacity, the proposed scheme tries to reduce the amount of information saved for recovery. The MANET structure used in our algorithm is hierarchical, and the scheme is based on the Cluster Based Routing Protocol (CBRP), which belongs to the class of hierarchical reactive routing protocols. The protocol we propose is a non-blocking coordinated checkpointing algorithm suitable for ad hoc environments; it produces a consistent set of…

15. Analysis of Multipath Routing in Random Ad Hoc Networks Scenario. Indrani Das. 2014-10-01. In this study, we propose a multipath routing protocol for mobile ad hoc networks. Multipath routing overcomes various problems that occur when data is delivered over a single path. The proposed protocol selects multiple neighbour nodes of the source node to establish multiple paths toward the destination; these nodes are selected based on their minimum remaining distance from the destination. We compute the lengths of the various paths and the average hop count for different node densities in the network, considering only three paths in our evaluation. The results show that path 2 gives better results in terms of hop count and path length among the three paths.

16. A framework for reactive optimization in mobile ad hoc networks. McClary, Dan; Syrotiuk, Violet; Kulahci, Murat. 2008-01-01. We present a framework to optimize the performance of a mobile ad hoc network over a wide range of operating conditions. It includes screening experiments to quantify the parameters, and the interactions among parameters, that influence throughput. Profile-driven regression is applied to obtain a model of the non-linear behaviour of throughput.
17. Escape probability based routing for ad hoc networks
Zhang Xuanping; Qin Zheng; Li Xin 2006-01-01 Routes in an ad hoc network may fail frequently because of node mobility. Stability can therefore be an important element in the design of routing protocols. The node escape probability is introduced to estimate the lifetime and stability of links between neighboring nodes, and an escape probability based routing (EPBR) scheme to discover stable routes is proposed. Simulation results show that EPBR can discover stable routes that reduce the number of route rediscoveries, and that it is applicable to situations with highly dynamic network topologies and broad communication areas.

18. A Timed Calculus for Mobile Ad Hoc Networks
Mengying Wang 2012-12-01 Full Text Available We develop a timed calculus for mobile ad hoc networks embodying the peculiarities of local broadcast, node mobility and communication interference. We present a reduction semantics and a labelled transition semantics and prove the equivalence between them. We then apply our calculus to model and study some MAC-layer protocols, with special emphasis on node mobility and communication interference. A main purpose of the semantics is to describe the various forms of interference that occur while nodes change their locations in the network. Such interference only occurs when a node is simultaneously reached by more than one ongoing transmission over the same channel.

2016-04-01 Full Text Available In this paper we present a new algorithm for clustering MANETs by considering several parameters. It is a new adaptive load-balancing technique for clustering Mobile Ad-hoc Networks (MANETs). A MANET is a special kind of wireless network with no central management, in which the nodes cooperatively manage themselves and maintain connectivity. The algorithm takes into account the local capabilities of each node, the remaining battery power, the degree of connectivity, and finally the power consumption based on the average distance between nodes and the candidate cluster head. The proposed algorithm efficiently decreases the overhead in the network, which enhances overall MANET performance. Reducing the repair time of broken routes makes the network more stable and reliable, and saving node power also helps guarantee a consistent and reliable network.
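As a rough illustration of how a combined clustering metric of this kind can be computed, the following sketch scores each node on remaining battery, degree of connectivity, and average distance to its neighbours, and elects the highest-scoring node as cluster head. The node data, radio range, and weights are invented for the example (node processing capability is carried but left unweighted); the paper's exact metric is not specified here.

    import math

    # Hypothetical node records: id -> (x, y, battery_fraction, capability)
    nodes = {
        1: (0.0, 0.0, 0.9, 0.7),
        2: (1.0, 0.5, 0.4, 0.9),
        3: (0.5, 1.5, 0.8, 0.5),
        4: (2.0, 2.0, 0.6, 0.8),
    }
    RANGE = 1.8          # assumed radio range
    W = (0.4, 0.3, 0.3)  # illustrative weights: battery, degree, proximity

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def score(nid):
        # Higher score = better cluster-head candidate (illustrative form).
        neigh = [m for m in nodes if m != nid and dist(nodes[nid], nodes[m]) <= RANGE]
        degree = len(neigh) / (len(nodes) - 1)
        # A smaller average distance to neighbours means less transmit power.
        avg_d = sum(dist(nodes[nid], nodes[m]) for m in neigh) / len(neigh) if neigh else RANGE
        proximity = 1.0 - avg_d / RANGE
        return W[0] * nodes[nid][2] + W[1] * degree + W[2] * proximity

    head = max(nodes, key=score)
    print("elected cluster head:", head)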
20. Simulation study for Mobile Ad hoc Networks Using DMAC Protocol
Vikas Sejwar 2012-01-01 Full Text Available This paper addresses the deafness problem in mobile ad hoc networks (MANETs) using directional antennas. Directional antennas are beneficial for wireless ad hoc networks consisting of a collection of wireless hosts, and a suitable Medium Access Control (MAC) protocol must be designed to best utilize them. Deafness arises when two nodes are engaged in an ongoing transmission and a third node (the deaf node) wants to communicate with one of them but gets no response, because the transmission between the two nodes is still in progress. Though directional antennas offer better spatial reuse, this problem can have a serious impact on network performance. A New DMAC (Directional Medium Access Control) protocol uses flags in DNAV (Directional Network Allocation Vector) tables to maintain information regarding the transmissions between the nodes in the network and their neighbors' locations. Two performance metrics have been used to show the impact of the New DMAC algorithm on the deafness problem in simulation: the RTS failure ratio and RTS retransmissions due to timeout.

Kumar, Sumit; Mehfuz, Shabana 2016-08-01

2. Analysis on Ad Hoc Routing Protocols in Wireless Sensor Networks
P.N.Renjith 2012-12-01 Full Text Available The outlook of wireless communication systems was transformed by the invention of Wireless Sensor Networks (WSNs). A WSN is a promising technology enabling a variety of applications such as environmental monitoring, security, and applications that save lives and assets. In a WSN, large numbers of sensor nodes are deployed to sense and gather information and forward it to the base station with the help of a routing protocol. Routing protocols play a major role by identifying and maintaining the routes in the network; the competence of a sensor network relies on the strength and effectiveness of the routing protocol used. In this paper, we present a simulation-based performance evaluation of different ad hoc routing protocols, namely AODV, DYMO, FSR, LANMAR, RIP and ZRP, in wireless sensor networks. Based on the study, the future research areas and key challenges for routing protocols in WSNs are to optimize network performance for QoS support and energy conservation.

3. On service differentiation in mobile Ad Hoc networks
张顺亮; 叶澄清 2004-01-01 A network model is proposed to support service differentiation for mobile ad hoc networks by combining a fully distributed admission control approach with the DIFS-based differentiation mechanism of IEEE 802.11. It can provide different kinds of QoS (Quality of Service) for various applications. Admission controllers determine a committed bandwidth based on the reserved bandwidth of flows and the resource utilization of the network. Packets are marked on entering the network by markers according to the committed rate. Based on the mark in the packet header, intermediate nodes handle received packets in different manners to provide applications with the QoS corresponding to the pre-negotiated profile. Extensive simulation experiments showed that the proposed mechanism can provide QoS guarantees to assured-service traffic and increase the channel utilization of the network.
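To make the marking step concrete, here is a minimal token-bucket marker of the kind such profile-based differentiation schemes typically rely on: packets within the committed rate are marked as assured and the excess as best-effort. The committed rate and bucket size are illustrative assumptions, not values from the paper.

    import time

    class Marker:
        # Token-bucket marker: traffic within the committed rate is marked
        # 'assured', excess traffic 'best-effort' (an illustrative reading of
        # profile-based marking, not the paper's exact scheme).
        def __init__(self, committed_bps, bucket_bytes):
            self.rate = committed_bps / 8.0      # bytes per second
            self.capacity = bucket_bytes
            self.tokens = bucket_bytes
            self.last = time.monotonic()

        def mark(self, pkt_len):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_len:
                self.tokens -= pkt_len
                return "assured"
            return "best-effort"

    m = Marker(committed_bps=64_000, bucket_bytes=4_000)
    print([m.mark(1500) for _ in range(5)])  # the first packets fit the bucket, the rest spill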
5. Autonomous Power Control MAC Protocol for Mobile Ad Hoc Networks
2006-01-01 Full Text Available Battery energy limitation has become a performance bottleneck for mobile ad hoc networks. IEEE 802.11 has been adopted as the current standard MAC protocol for ad hoc networks; however, it was developed without considering energy efficiency. To solve this problem, many modifications of IEEE 802.11 to incorporate power control have been proposed in the literature. The main idea of these power control schemes is to use the maximum possible power level for transmitting RTS/CTS and the lowest acceptable power for sending DATA/ACK. However, these schemes may degrade network throughput and reduce the overall energy efficiency of the network. This paper proposes the autonomous power control MAC protocol (APCMP), which allows mobile nodes to dynamically adjust the power level for transmitting DATA/ACK according to the distance between the transmitter and its neighbors. In addition, the power level for transmitting RTS/CTS is also adjusted according to the power level for DATA/ACK packets. In this paper, the performance of the APCMP protocol is evaluated by simulation and is compared with that of other protocols.

6. Identification of node behavior for Mobile Ad-hoc Network
Khyati Choure, Sanjay Sharma 2012-12-01 Full Text Available In present ad-hoc network scenarios, node behavior is often unstable: nodes may not work properly or satisfactorily, and may act selfishly rather than cooperatively. To save battery life, selfish nodes are reluctant to share resources such as bandwidth and do not hesitate to block packets sent by others for forwarding while transmitting their own packets. The high mobility of nodes makes the situation even more complicated. Multiple routing protocols have been developed especially for these conditions during the last few years, to find optimized routes from a source to some destination, but it is still difficult to know the actual shortest path free of attackers or bad nodes. Ad-hoc networks suffer from many issues, i.e., congestion, throughput, delay, security and network overhead; packet delivery ratio is a topic of ongoing research. Node failure may be caused either by natural failure of node links or by the act of an attacker or bad node, which may degrade the performance of the network slowly or drastically and which also needs to be identified. In this paper, we identify the good and bad nodes. A simulation has been performed to achieve better performance with a modified AODV; good results have been obtained in terms of throughput and packet delivery ratio.
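One common way to separate good from bad nodes, consistent with the behaviour described above, is to compare how many packets a neighbour was asked to relay with how many it actually forwarded (observed, for example, by promiscuous overhearing). The counters and threshold below are invented for illustration; the paper's actual detection criterion is not reproduced here.

    # Hypothetical per-neighbour counters gathered by promiscuous overhearing:
    # packets handed to the neighbour for relaying vs. packets it forwarded.
    observed = {
        "n1": {"handed": 120, "forwarded": 117},
        "n2": {"handed": 95,  "forwarded": 23},   # drops most traffic
        "n3": {"handed": 40,  "forwarded": 38},
    }
    THRESHOLD = 0.8  # assumed cut-off between 'good' and 'bad' behaviour

    def classify(stats):
        ratio = stats["forwarded"] / stats["handed"] if stats["handed"] else 1.0
        return "good" if ratio >= THRESHOLD else "selfish/bad"

    for node, stats in observed.items():
        print(node, classify(stats))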
Realp Marc 2005-01-01 Full Text Available In mobile ad hoc radio networks, mechanisms for accessing the radio channel are extremely important in order to improve network efficiency. In this paper, the load-adaptable medium access control for ad hoc networks (LAMAN) protocol is described. LAMAN is a novel decentralized multipacket MAC protocol designed following a cross-layer approach. Basically, this protocol is a hybrid CDMA-TDMA-based protocol that aims at throughput maximization in multipacket communication environments by efficiently combining contention and conflict-free protocol components. Such a combination of components is used to adapt the nodes' access priority to changes in the traffic load while, at the same time, accounting for the multipacket reception (MPR) capability of the receivers. A theoretical analysis of the system is developed, presenting closed-form expressions for network throughput and packet delay. Simulations show the validity of our analysis, and the performance of a LAMAN-based system is compared with that of an Aloha-CDMA-based one.

8. Securing Zone Routing Protocol in Ad-Hoc Networks
Ibrahim S. I. Abuhaiba 2012-09-01 Full Text Available This paper is a contribution in the field of security analysis of mobile ad-hoc networks and the security requirements of applications. Limitations of the mobile nodes have been studied in order to design a secure routing protocol that thwarts different kinds of attacks. Our approach is based on the Zone Routing Protocol (ZRP), the most popular hybrid routing protocol. The importance of the proposed solution lies in the fact that it ensures security as needed, by providing a comprehensive architecture of a Secure Zone Routing Protocol (SZRP) based on efficient key management, secure neighbor discovery, secure routing packets, detection of malicious nodes, and prevention of these nodes from destroying the network. In order to fulfill these objectives, both efficient key management and secure neighbor mechanisms have been designed to be performed prior to the functioning of the protocol. To validate the proposed solution, we use the network simulator NS-2 to test the performance of the secure protocol and compare it with the conventional zone routing protocol over a number of factors that affect the network. Our results clearly show that our secure version surpasses the conventional protocol in packet delivery ratio, with a tolerable increase in routing overhead and average delay. Security analysis also proves in detail that the proposed protocol is robust enough to thwart all classes of ad-hoc attacks.

9. Distributed intrusion detection for mobile ad hoc networks
Yi Ping; Jiang Xinghao; Wu Yue; Liu Ning 2008-01-01 Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. Mobile ad hoc networks are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of a centralized monitoring and management point. The traditional way of protecting networks with firewalls and encryption software is no longer sufficient and effective given these features. A distributed intrusion detection approach based on timed automata is given. A cluster-based detection scheme is presented, in which a node is periodically elected as the monitor node for a cluster. These monitor nodes not only make local intrusion detection decisions, but also cooperatively take part in global intrusion detection. The timed automata are constructed by manually abstracting the correct behaviours of a node according to the dynamic source routing (DSR) protocol. The monitor nodes can verify the behaviour of every node via the timed automata and validly detect real-time attacks without intrusion signatures or training data. Compared with an architecture in which each node is its own IDS agent, the approach is much more efficient while maintaining the same level of effectiveness. Finally, the intrusion detection method is evaluated through simulation experiments.
Leith, Alex 2012-05-01 In this paper, we study the distributed-duality-based optimization of a multisubchannel ad hoc cognitive radio network (CRN) that coexists with a multicell primary radio network (PRN). For radio resource allocation in multiuser orthogonal frequency-division multiplexing (MU-OFDM) systems, the orthogonal-access-based exclusive subchannel assignment (ESA) technique has been a popular method, but it is suboptimal in ad hoc networks, because nonorthogonal access between multiple secondary-user links using shared subchannel assignment (SSA) can bring a higher weighted sum rate. We utilize the Lagrangian dual decomposition tool and design low-complexity, near-optimal SSA resource allocation methods, assuming practical discrete-rate modulation and that the CRN-to-PRN interference constraint has to be strictly satisfied. However, available SSA methods for CRNs are either suboptimal or involve high complexity and suffer from slow convergence. To address this problem, we design fast-convergence SSA duality schemes and introduce several novel methods to increase the speed of convergence and to satisfy various system constraints with low complexity. For practical implementation in ad hoc CRNs, we design distributed-duality schemes that involve only a small number of CRN local information exchanges for the dual update. The effects of many system parameters are presented through simulation results, which show that the near-optimal SSA duality scheme can perform significantly better than the suboptimal ESA duality and SSA-iterative waterfilling schemes, and that the performance loss of the distributed schemes is small compared with their centralized counterparts. © 2012 IEEE.

11. Studies on urban vehicular ad-hoc networks
Zhu, Hongzi 2013-01-01 With the advancement of wireless technology, vehicular ad hoc networks (VANETs) are emerging as a promising approach to realizing 'smart cities' and addressing many important transportation problems such as road safety, efficiency, and convenience. This brief provides an introduction to the large trace data set collected from thousands of taxis and buses in Shanghai, the largest metropolis in China. It also presents the challenges, design issues, performance modeling and evaluation of a wide spectrum of VANET research topics, ranging from realistic vehicular mobility models and opportunistic ro...

12. An Efficient Routing Algorithm in Ad Hoc Networks
WANG Shuqiao; LI Hongyan; LI Jiandong 2005-01-01 The Dynamic Source Routing protocol (DSR) is an on-demand routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes. In this paper, mechanisms such as route lifetime prediction, route creation time and an adaptive gratuitous route reply mode are introduced into DSR to obtain an efficient routing algorithm referred to as E-DSR. The simulation results show that E-DSR can improve the packet delivery rate and reduce the routing overhead compared with hop-based DSR.

13. Connectivity analysis of one-dimensional ad-hoc networks
Bøgsted, Martin; Rasmussen, Jakob Gulddahl; Schwefel, Hans-Peter 2011-01-01 Application and communication protocols in dynamic ad-hoc networks are exposed to physical limitations imposed by the connectivity relations that result from mobility. Motivated by vehicular freeway scenarios, this paper analyzes a number of important connectivity metrics for instantaneous snapshots of stochastic geographic movement patterns: (1) the single-hop connectivity number, corresponding to the number of single-hop neighbors of a mobile node; (2) the multi-hop connectivity number, expressing the number of nodes reachable via multi-hop paths of arbitrary hop-count; (3) the connectivity...
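For a vehicular (essentially one-dimensional) scenario like the one motivating this work, two of these metrics are easy to compute for a random snapshot. The sketch below places nodes uniformly on a stretch of road and reports the mean single-hop neighbour count and how far a message from the leftmost node can propagate over multi-hop paths. All constants are assumed, and the node-by-node walk gives an upper bound on the hop count.

    import random

    random.seed(1)
    ROAD, N, R = 5000.0, 40, 250.0   # road length (m), node count, radio range (m); assumed
    pos = sorted(random.uniform(0, ROAD) for _ in range(N))

    # (1) single-hop connectivity number of each node
    single_hop = [sum(1 for j in range(N) if j != i and abs(pos[j] - pos[i]) <= R)
                  for i in range(N)]

    # (3) connectivity distance from the leftmost node: the message propagates
    # while consecutive gaps stay within radio range (node-by-node walk, so the
    # hop count is an upper bound; greedy forwarding would use fewer hops).
    i = hops = 0
    while i + 1 < N and pos[i + 1] - pos[i] <= R:
        i += 1
        hops += 1

    print("mean single-hop neighbours:", sum(single_hop) / N)
    print(f"reach from leftmost node: {pos[i] - pos[0]:.0f} m in {hops} hops")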
14. Recovery from Wormhole Attack in Mobile Ad Hoc Network (MANET)
JI Xiao-jun; TIAN Chang; ZHANG Yu-sen 2006-01-01 The wormhole attack is a serious threat against MANETs (mobile ad hoc networks) and their routing protocols. A new approach, tunnel key node identification (TKNI), is proposed. Based on tunnel-key-node identification and priority-based route discovery, TKNI can rapidly rebuild communications that have been blocked by a wormhole attack. Compared to previous approaches, the proposed approach targets both static and dynamic topology environments, addresses both visible and invisible wormhole attack modes, requires no extra hardware, has low overhead, and can be easily applied to MANETs.

15. Complex Threshold Key Management for Ad Hoc Network
GUO Wei; XIONG Zhong-wei; LI Zhi-tang 2005-01-01 A complex threshold key management framework is proposed that can address the challenges posed by the unique nature of ad hoc networks. Relying on the cooperation of the controller and participating nodes, the scheme is efficient under changing operational environments and tolerant of node faults, taking advantage of the benefits of both key management approaches while alleviating their limitations. For the cooperation of the controller and the participating nodes, a (t,n) threshold elliptic curve sign-encryption scheme with a specified receiver is also proposed. Using this threshold sign-encryption scheme, the key management distributes trust between a controller and a set of participating nodes.

16. PERFORMANCE COMPARISON OF MOBILE AD HOC NETWORK ROUTING PROTOCOLS
Mandeep Kaur Gulati 2014-03-01 Full Text Available A Mobile Ad-hoc Network (MANET) is an infrastructure-less and decentralized network which needs a robust dynamic routing protocol. Many routing protocols for such networks have been proposed so far to find optimized routes from source to destination; prominent among them are the Dynamic Source Routing (DSR), Ad-hoc On Demand Distance Vector (AODV), and Destination-Sequenced Distance Vector (DSDV) routing protocols. The performance comparison of these protocols should be considered the primary step towards the invention of a new routing protocol. This paper presents a performance comparison of the proactive and reactive routing protocols DSDV, AODV and DSR, based on QoS metrics (packet delivery ratio, average end-to-end delay, throughput, jitter, normalized routing overhead and normalized MAC overhead), using the NS-2 simulator. The performance comparison is conducted by varying mobility speed, number of nodes and data rate. The comparison results show that AODV performs well, though not the best, among all the studied protocols.
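Two of the QoS metrics used in comparisons like this one are simple functions of a send/receive trace. As a minimal sketch, assuming a hypothetical event log rather than NS-2's actual trace format:

    # Computing packet delivery ratio and average end-to-end delay from a
    # hypothetical trace of (packet_id, event, timestamp) tuples.
    trace = [
        (1, "sent", 0.00), (1, "recv", 0.12),
        (2, "sent", 0.10), (2, "recv", 0.31),
        (3, "sent", 0.20),                      # lost packet: never received
    ]
    sent = {pid: t for pid, ev, t in trace if ev == "sent"}
    recv = {pid: t for pid, ev, t in trace if ev == "recv"}

    pdr = len(recv) / len(sent)
    delays = [recv[p] - sent[p] for p in recv]
    avg_delay = sum(delays) / len(delays)
    print(f"packet delivery ratio = {pdr:.2f}, average end-to-end delay = {avg_delay*1000:.0f} ms")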
17. Evaluating And Comparison Of Intrusion In Mobile AD HOC Networks
Zougagh Hicham 2012-04-01 Full Text Available In recent years, the use of mobile ad hoc networks (MANETs) has become widespread in many applications. Due to their deployment nature, MANETs are more vulnerable to malicious attack. Absolute security in a mobile ad hoc network is very hard to achieve because of its fundamental characteristics, such as dynamic topology, open medium, absence of infrastructure, limited power and limited bandwidth. Prevention methods like authentication and cryptography techniques alone are not able to provide the security these types of networks need. Moreover, such prevention techniques have a limited effect in general, as they are designed for a set of known attacks: they are unlikely to prevent newer attacks designed to circumvent the existing security measures. For this reason, a second mechanism is needed to detect and respond to these newer attacks. Efficient intrusion detection must therefore be deployed to facilitate the identification and isolation of attacks. In this article we classify the architectures for IDS that have so far been introduced for MANETs, then present and compare the existing intrusion detection techniques in MANETs. We then provide some directions for future research.

18. SEMAN: A Novel Secure Middleware for Mobile Ad Hoc Networks
Eduardo da Silva 2016-01-01 Full Text Available As a consequence of the particularities of mobile ad hoc networks (MANETs), such as dynamic topology and self-organization, the implementation of complex and flexible applications is a challenge. To enable the deployment of such applications, several middleware solutions have been proposed. However, these solutions do not completely consider the security requirements of these networks. Based on the limitations of the existing solutions, this paper presents a new secure middleware, called Secure Middleware for Ad Hoc Networks (SEMAN), which provides a set of basic, secure services to MANETs, aiming to facilitate the development of distributed, complex, and flexible applications. SEMAN considers the context of applications and organizes nodes into groups, also based on these contexts. The middleware includes three modules: service, processing, and security. The security module is the main part of the middleware, with the following components: key management, trust management, and group management. All these components were developed and are described in this paper. They are supported by a cryptographic core and behave according to security rules and policies. The integration of these components provides security guarantees against attacks on the applications that use the middleware services.
19. Distributed Intrusion Detection System for Ad hoc Mobile Networks
2012-01-01 Full Text Available In mobile ad hoc networks, resource restrictions on the bandwidth, processing capability, battery life and memory of mobile devices lead to a tradeoff between security and resource consumption. Due to some unique properties of MANETs, proactive security mechanisms like authentication, confidentiality, access control and non-repudiation are hard to put into practice, while some additional security requirements are also needed, like cooperation fairness, location confidentiality, data freshness and absence of traffic diversion. Traditional security mechanisms, i.e. authentication and encryption, provide a first line of defence for MANETs, but a reactive security mechanism is also required, one that analyzes the routing packets and checks the overall network behavior of the MANET. Here we propose a local, distributed intrusion detection system for ad hoc mobile networks. In the proposed distributed IDS, each mobile node works as a smart agent: data is collected by the node locally and analyzed for malicious activity, and if any abnormal activity is discovered, the node informs the surrounding nodes as well as the base station. The system works like a client-server model, with each node working in collaboration with the server, which updates the node's database using a Markov process. The proposed local distributed IDS shows a balance between the false positive and false negative rates. Reactive security mechanisms are very useful for finding abnormal activities even when proactive security mechanisms are present. A distributed local IDS is useful for deep-level inspection and is suited to the varying nature of MANETs.

Hossain, M. Julius; Dewan, M. Ali Akber; Chae, Oksam

1. Vehicular Ad Hoc and Sensor Networks: Principles and Challenges
2011-06-01 Full Text Available The rapid increase of vehicular traffic and congestion on the highways has begun to hamper the safe and efficient movement of traffic. Consequently, year by year, we see a rising rate of car accidents and casualties in most countries. Exploiting new technologies, such as wireless sensor networks, is therefore required to reduce these saddening and reprehensible statistics. This has motivated us to propose a novel and comprehensive system that utilizes wireless sensor networks for vehicular networks. We coin the vehicular network employing wireless sensor networks as Vehicular Ad Hoc and Sensor Network, or VASNET in short. The proposed VASNET is intended particularly for highway traffic. VASNET is a self-organizing ad hoc and sensor network comprising a large number of sensor nodes. In VASNET there are two kinds of sensor nodes: some are embedded on the vehicles (vehicular nodes) and others are deployed at predetermined distances beside the highway road, known as Road Side Sensor nodes (RSS). The vehicular nodes are used, for instance, to sense the velocity of the vehicle. We can have some Base Stations (BS), such as a police traffic station, a firefighting group or a rescue team. The base stations may be stationary or mobile. VASNET provides the capability of wireless communication between vehicular nodes and stationary nodes, to increase safety and comfort for vehicles on the highway roads. In this paper we explain the main fundamentals and challenges of VASNET.

2. Probabilistic Models and Process Calculi for Mobile Ad Hoc Networks
Song, Lei Due to the wide use of communicating mobile devices, mobile ad hoc networks (MANETs) have gained in popularity in recent years. In order that the devices communicate properly, many protocols have been proposed, working at different levels. Devices in a MANET are not stationary but may keep moving..., thus the network topology may undergo constant changes. Moreover, the devices in a MANET are loosely connected, not depending on pre-installed infrastructure or central control components; they exchange messages via wireless connections, which are less reliable compared to wired connections. Therefore... issues in MANETs, e.g. mobility and unreliable connections. Specifically: 1. We first propose a discrete probabilistic process calculus with which we can model, in a MANET, that a wireless connection is not reliable and that the network topology may undergo changes. We equip each wireless connection...
3. Vehicular ad hoc networks standards, solutions, and research
Molinaro, Antonella; Scopigno, Riccardo 2015-01-01 This book presents vehicular ad-hoc networks (VANETs) from their onset, gradually going into technical details, providing a clear understanding of both theoretical foundations and more practical investigation. The editors gathered top-ranking authors to provide comprehensiveness and timely content; the invited authors were carefully selected from a list of who's who in the respective fields of interest: there are as many from academia as from the standardization and industry sectors from around the world. The covered topics are organized around five parts, starting from a historical overview of vehicular communications and standardization/harmonization activities (Part I), then progressing to the theoretical foundations of VANETs and a description of the day-one standard-compliant solutions (Part II), going into the details of vehicular networking and security (Part III) and the tools to study VANETs, from mobility and channel models to network simulators and field trial methodologies (Part IV), and fi...

4. Integrating Mobile Ad Hoc Network to the Internet
WANG Mao-ning 2005-01-01 A novel scheme is presented to integrate mobile ad hoc networks (MANETs) with the Internet and to support mobility across wireless local area networks (WLANs) and MANETs. The mobile nodes, connected as a MANET, employ the optimized link state routing (OLSR) protocol for routing within the MANET. Mobility management across WLANs and MANETs is achieved through the hierarchical mobile IPv6 (HMIPv6) protocol. The performance is evaluated on an HMIPv6-based test-bed composed of WLANs and MANETs, and the efficiency gain obtained from using HMIPv6 in such a hybrid network is investigated. The investigation shows that the use of HMIPv6 can achieve up to a 27% gain in reducing the handoff latency when a mobile roams within a domain. Concerning the reduction of the signaling load on the Internet, the use of HMIPv6 can achieve at least a 54% gain, converging to 69%.

Dobre, Ciprian 2012-01-01 Mobile Advertisement is a location-aware dissemination solution built on top of a vehicular ad-hoc network. We envision a network of WiFi access points that dynamically disseminates data to clients running on the cars' smart devices. The approach can be considered an alternative to static advertisement billboards and can be useful to business companies wanting to dynamically advertise their products and offers to people driving their cars. The clients can subscribe to information based on specific topics. We present design solutions that use access points as emitters for transmitting messages to wireless-enabled devices equipped on vehicles. We also present implementation details for the evaluation of the proposed solution using a simulator designed for VANET applications. The results show that the application can be used for transferring a significant amount of data even under difficult conditions, such as when cars are moving at increased speeds, or when the congested Wi-Fi network causes significant packet loss...

6. Congestion Reduction Using Ad hoc Message Dissemination in Vehicular Networks
Hewer, Thomas D 2008-01-01 Vehicle-to-vehicle communications can be used effectively for intelligent transport systems (ITS) and location-aware services. The ability to disseminate information in an ad-hoc fashion allows pertinent information to propagate faster through the network. In the realm of ITS, the ability to spread warning information faster and further is of great advantage to the receivers of this information. In this paper we propose and present a message-dissemination procedure that uses vehicular wireless protocols to influence traffic flow, reducing congestion in road networks. The computational experiments presented in this paper show how an intelligent driver model (IDM) and car-following model can be adapted to 'react' to the reception of information. This work also demonstrates the advantages of coupling together traffic modelling tools and network simulation tools.
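The intelligent driver model mentioned here is a standard car-following model, so the 'reaction' can be sketched concretely. Below, the IDM acceleration uses its textbook form with typical parameter values; the way a warning message scales the desired speed is purely an assumption for illustration, not the paper's mechanism.

    import math

    # Standard Intelligent Driver Model (IDM) parameters (typical textbook values):
    # desired speed, time headway, max acceleration, comfortable braking,
    # acceleration exponent, minimum gap.
    V0, T, A, B, DELTA, S0 = 33.3, 1.5, 1.0, 2.0, 4, 2.0

    def idm_accel(v, gap, dv):
        # IDM acceleration given own speed v, bumper-to-bumper gap, and closing speed dv.
        s_star = S0 + v * T + v * dv / (2 * math.sqrt(A * B))
        return A * (1 - (v / V0) ** DELTA - (s_star / max(gap, 0.1)) ** 2)

    # A broadcast congestion warning could be modelled (illustratively) by
    # scaling the desired speed down, so the car-following model 'reacts'.
    def on_warning(v0, severity):
        return v0 * (1.0 - 0.5 * severity)   # assumed reaction, not from the paper

    v, gap, dv = 25.0, 40.0, 2.0
    print("normal accel:", round(idm_accel(v, gap, dv), 3))
    V0 = on_warning(V0, severity=0.6)
    print("accel after warning:", round(idm_accel(v, gap, dv), 3))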
7. TRUSTWORTHY ENABLED RELIABLE COMMUNICATION ARCHITECTURE IN MOBILE AD HOC NETWORK
Saravanan Dhavamani 2014-01-01 Full Text Available Ad hoc networks are widely used in military and other scientific areas. Various kinds of routing protocols are available to establish routes; with proper analysis, one can choose a routing protocol to form one's own network with respect to the number of nodes and security considerations. The mobility of nodes makes the environment infrastructure-less. Such networks also have a number of characteristics which make security difficult. A trust recommendation mechanism has been designed to keep track of node behavior and establish the trustworthiness of the network. With this trustworthiness, a node can make an objective judgment of another node's trustworthiness to maintain the whole system at a certain security level. The motivation of the work is to understand the behavior of routing protocols and their trustworthiness.

8. A Holistic Approach to Information Distribution in Ad Hoc Networks
Casetti, Claudio; Fiore, Marco; La, Chi-Anh; Michiardi, Pietro 2009-01-01 We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution, and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication/drop strategy to adapt the number of inform...

9. Comparative Study of Cooperation Tools for Mobile Ad Hoc Networks
J. Molina-Gil 2016-01-01 Full Text Available Mobile ad hoc networks are formed spontaneously to use the wireless medium for communication among nodes. Each node in this type of network is its own authority and has unpredictable behaviour. These features involve a cooperation challenge that has been addressed in previous proposals with methods based on virtual currencies. In this work, those methods have been simulated in NS-2 and the results analyzed, showing several weaknesses. In particular, it was concluded that the existing methods do not provide significant advances compared with networks without any mechanism for promoting cooperation. Consequently, this work presents three new proposals that try to solve those problems. The obtained results show that the new proposals offer significant improvements over previous schemes based on virtual currencies.
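The virtual-currency idea these schemes build on can be captured in a few lines of bookkeeping: relays earn credit for each packet they forward, and a source must hold enough credit to pay for its own traffic. The sketch below is a deliberately naive illustration; the per-hop price and initial balances are assumptions, and real schemes add tamper-proof counters or banking infrastructure.

    class Wallet:
        # Illustrative 'virtual currency' bookkeeping of the kind such schemes
        # simulate: relays earn one credit per forwarded packet, sources pay
        # one credit per hop. Prices are assumptions, not from the paper.
        def __init__(self, initial=10):
            self.credits = initial

        def can_send(self, hops):
            return self.credits >= hops

        def send(self, hops):
            if not self.can_send(hops):
                raise RuntimeError("insufficient credit - must forward for others first")
            self.credits -= hops

        def forwarded(self):
            self.credits += 1

    src, relay = Wallet(3), Wallet(0)
    src.send(hops=2)
    relay.forwarded(); relay.forwarded()
    print(src.credits, relay.credits)   # 1 2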
10. Secure Routing and Data Transmission in Mobile Ad Hoc Networks
Waleed S. Alnumay 2014-01-01 Full Text Available In this paper, we present an identity (ID) based protocol that secures AODV and TCP so that they can be used in the dynamic and attack-prone environments of mobile ad hoc networks. The proposed protocol protects AODV using Sequential Aggregate Signatures (SAS) based on RSA. It also generates a session key for each pair of source-destination nodes in a MANET for securing the end-to-end transmitted data. Here each node has an ID, which is derived from its public key, and the messages that are sent are authenticated with a signature/MAC. The proposed scheme does not allow a node to change its ID throughout the network lifetime, which makes the network secure against attacks that target AODV and TCP in MANETs. We present a performance analysis to validate our claim.

11. Topology for efficient information dissemination in ad-hoc networking
Jennings, E.; Okino, C. M. 2002-01-01 In this paper, we explore the information dissemination problem in ad-hoc wireless networks. First, we analyze the probability of successful broadcast, assuming: the nodes are uniformly distributed; the available area has a lower bound relative to the total number of nodes; and there is zero knowledge of the overall topology of the network. Showing that the probability of such events is small motivates the extraction of good graph topologies to minimize the overall number of transmissions. Three algorithms are used to generate topologies of the network with guaranteed connectivity: the minimum radius graph, the relative neighborhood graph and the minimum spanning tree. Our simulation shows that the relative neighborhood graph has certain good graph properties, which make it suitable for efficient information dissemination.

12. Two Algorithms for Network Size Estimation for Master/Slave Ad Hoc Networks
Ali, Redouane; Rio, Miguel 2009-01-01 This paper proposes an adaptation of two network size estimation methods, random tour and gossip-based aggregation, to suit master/slave mobile ad hoc networks. We show that it is feasible to accurately estimate the size of an ad hoc network whose topology changes due to mobility using both methods. The algorithms were modified to account for the specific constraints of master/slave ad hoc networks, and the results show that the proposed modifications perform better on these networks than the original protocols. Each of the two algorithms presents strengths and weaknesses, and these are outlined in this paper.

13. Location Based Opportunistic Routing Protocol for Mobile Ad Hoc Networks
Jubin Sebastian E 2012-01-01 Full Text Available Most existing ad hoc routing protocols are susceptible to node mobility, especially in large-scale networks. This paper proposes a Location Based Opportunistic Routing protocol (LOR) to address the problem of delivering data packets in highly dynamic mobile ad hoc networks in a reliable and timely manner. The protocol takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission will serve as forwarding candidates and take their turn to forward the packet if it is not relayed by the specific best forwarder within a certain period of time. By utilizing such in-the-air backup, communication is maintained without interruption. The additional latency incurred by local route recovery is greatly reduced, and the duplicate relaying caused by packet rerouting is also decreased. Simulation results on NS2 verified the effectiveness of the proposed protocol, with an improvement in throughput of 28%.
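The candidate-selection step of this style of opportunistic forwarding can be sketched in a few lines: neighbours that offer positive geographic progress towards the destination are ranked, and each arms a holdoff timer proportional to its rank, so a better-placed candidate relays first and the rest suppress their copies on overhearing it. Coordinates and the timer spacing below are invented for the example.

    import math

    def progress(node, dest, cur):
        # Geographic progress a neighbour offers towards the destination.
        return math.dist(cur, dest) - math.dist(node, dest)

    cur, dest = (0.0, 0.0), (100.0, 0.0)
    neighbours = {"a": (30.0, 10.0), "b": (20.0, -5.0), "c": (-10.0, 0.0)}

    # Forwarding candidates: neighbours with positive progress, best first.
    # Each candidate would arm a timer proportional to its rank and suppress
    # its copy if it overhears a better-ranked candidate relaying first.
    cands = sorted((n for n in neighbours if progress(neighbours[n], dest, cur) > 0),
                   key=lambda n: -progress(neighbours[n], dest, cur))
    timers = {n: i * 0.005 for i, n in enumerate(cands)}  # assumed 5 ms rank spacing
    print(cands, timers)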
14. Dynamic Carpooling Using Wireless Ad-Hoc Network
Abhishek V. Potnis 2014-04-01 Full Text Available The increase in awareness about the conservation of natural resources in today's world has led to carpooling becoming a widely followed practice. Carpooling refers to the sharing of a vehicle with other passengers, thus reducing the fuel costs the passengers would have endured had they traveled individually. Carpooling helps reduce fuel consumption, thereby helping conserve natural resources. Conventional carpooling portals require users to register themselves and input their desired departure and destination points. These portals maintain a database of the users and map them to the users who have registered their cars for pooling. This paper proposes a decentralized method of dynamic carpooling using a wireless ad-hoc network, instead of having the users register on a web portal. The device of a user requesting the carpool service can directly talk to another user's device offering the pooled car, using the wireless ad-hoc network, thereby eliminating the need for the user to connect to the internet to access a web portal.

15. Reliable Multicast Error Recovery Algorithm in Ad-Hoc Networks
Tariq Abdullah 2012-11-01 Full Text Available A mobile ad hoc network is an autonomous system of mobile nodes characterized by wireless links. The major challenge in ad hoc networks lies in adapting multicast communication to environments where mobility is unlimited and failures are frequent. Reliable multicast delivery requires a multicast message to be received by all mobile nodes in the communication group, and the recovery mechanism requires feedback messages from each of the receivers. In tree-based recovery protocols, groups of nodes form recovery regions that designate a forwarding node per region for retransmitting lost messages. In this study, a local error recovery algorithm is applied within these relatively small regions, where repaired packets are retransmitted only to the requesting receivers in the local group. These receivers form a subgroup of the local group, which is itself a subgroup of the global multicast group. By applying the local error recovery algorithm, the number of duplicated packets due to packet retransmission decreases, which improves system performance. Simulation results demonstrate the scalability of the proposed algorithm in comparison with the Source Tree Reliable Multicast (STRM) protocol. The algorithm achieved up to a 2.33% improvement in the percentage of duplicated packets at stable mobility speeds without incurring any further communication or intense computation overhead.
16. SPM: Source Privacy for Mobile Ad Hoc Networks
Ren Jian 2010-01-01 Full Text Available Source privacy plays a key role in communication infrastructure protection. It is a critical security requirement for many mission-critical communications. This is especially true for mobile ad hoc networks (MANETs) due to node mobility and the lack of physical protection. Existing cryptosystem-based techniques and broadcasting-based techniques cannot easily be adapted to MANETs because of their extensive cryptographic computation and/or large communication overhead. In this paper, we first propose a novel unconditionally secure source anonymous message authentication scheme (SAMAS). This scheme enables a message sender to transmit messages without relying on any trusted third parties; while providing source privacy, the proposed scheme can also provide message content authenticity. We then propose a novel communication protocol for MANETs that can ensure communication privacy for both the message sender and the message recipient, and that can also protect end-to-end routing privacy. Our security analysis demonstrates that the proposed protocol is secure against various attacks. The theoretical analysis and simulation show that the proposed scheme is efficient and can provide a high message delivery ratio. The proposed protocol can be used for critical infrastructure protection and secure file sharing in mobile ad hoc networks where dynamic groups can be formed.

17. Trust recovery model of Ad Hoc network based on identity authentication scheme
Liu, Jie; Huan, Shuiyuan 2017-05-01 Mobile ad hoc network trust models are widely used to solve mobile ad hoc network security issues. Aiming at the problem of reduced network availability caused by the handling of malicious and selfish nodes in trust-model-based mobile ad hoc network routing, an authentication mechanism for mobile ad hoc networks based on identity authentication is proposed, which uses identity authentication to identify malicious nodes and recovers the trust of selfish nodes, in order to reduce network congestion and improve network quality. The simulation results show that implementing the mechanism can effectively improve network availability and security.

18. Reliability analysis of cluster-based ad-hoc networks
Cook, Jason L. (Quality Engineering and System Assurance, Armament Research Development Engineering Center, Picatinny Arsenal, NJ, United States); Ramirez-Marquez, Jose Emmanuel (School of Systems and Enterprises, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, United States) 2008-10-15 The mobile ad-hoc wireless network (MAWN) is a new and emerging network scheme that is being employed in a variety of applications. The MAWN varies from traditional networks in that it is a self-forming and dynamic network, free of infrastructure: only the mobile nodes comprise the network. Pairs of nodes communicate either directly or through other nodes; to do so, each node acts, in turn, as a source, destination, and relay of messages. The virtue of a MAWN is the flexibility this provides; however, this unique feature also poses a challenge for reliability analyses. The variability and volatility of the MAWN configuration make typical reliability methods (e.g. reliability block diagrams) inappropriate, because no single structure or configuration represents all manifestations of a MAWN. For this reason, new methods are being developed to analyze the reliability of this new networking technology. Recently published methods adapt to this feature by treating the configuration probabilistically or by the inclusion of embedded mobility models. This paper joins both methods and expands upon these works by modifying the problem formulation to address the reliability analysis of a cluster-based MAWN, which is deployed in applications with constraints on networking resources such as bandwidth and energy. The paper presents the problem formulation, a discussion of applicable reliability metrics for the MAWN, and an illustration of a Monte Carlo simulation method through the analysis of several example networks.
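The Monte Carlo idea here is straightforward to illustrate: draw many random snapshots of node positions and up/down states, and count the fraction in which a source can still reach a sink over multi-hop links. The sketch below does this for a unit-square network; every constant is an assumption chosen only to make the estimate concrete, not a parameter from the paper.

    import random

    def two_terminal_reliability(n=30, p_up=0.9, radio=0.25, trials=2000, seed=7):
        # Monte Carlo estimate of source-sink connectivity for a MAWN snapshot:
        # nodes placed uniformly in the unit square, each up with probability p_up.
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            pts = [(rng.random(), rng.random()) for _ in range(n)]
            up = [i for i in range(n) if rng.random() < p_up]
            if 0 not in up or n - 1 not in up:
                continue    # source or sink down: trial counts as a failure
            # flood-fill search over links shorter than the radio range
            frontier, seen = [0], {0}
            while frontier:
                i = frontier.pop()
                for j in up:
                    if j not in seen and (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 <= radio ** 2:
                        seen.add(j)
                        frontier.append(j)
            ok += (n - 1) in seen
        return ok / trials

    print("estimated reliability:", two_terminal_reliability())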
19. A Distributed Mutual Exclusion Algorithm for Mobile Ad Hoc Networks
Orhan Dagdeviren 2012-04-01 Full Text Available We propose a distributed mutual exclusion algorithm for mobile ad hoc networks. The algorithm requires a ring of cluster coordinators as the underlying topology, which is built by first clustering the mobile nodes and then forming a backbone ring consisting of the cluster heads. A modified version of the Ricart-Agrawala algorithm on top of this topology provides, analytically and experimentally, an order-of-magnitude decrease in message complexity with respect to the original algorithm. We analyze the algorithm, provide performance results of the implementation, discuss fault tolerance and other algorithmic extensions, and show that this architecture can be used for other middleware functions in mobile networks.

Jing Zheng; Jinshu Su; Kan Yang 2004-01-01 In mobile ad hoc networks (MANETs), nodes move freely and the distribution of access requests changes dynamically. Replica allocation in such a dynamic environment is a significant challenge. In this paper, a dynamic adaptive replica allocation algorithm that can adapt to node motion is proposed to minimize the communication cost of object access. When changes occur in the access requests for the object or in the network topology, each replica node collects access requests from its neighbors and makes decisions locally to expand the replica to neighbors or to relinquish the replica. The algorithm dynamically adapts the replica allocation scheme to a locally optimal one. Simulation results show that our algorithm efficiently reduces the communication cost of object access in MANET environments.

1. FDAN: Failure Detection Protocol for Mobile Ad Hoc Networks
Benkaouha, Haroun; Abdelli, Abdelkrim; Bouyahia, Karima; Kaloune, Yasmina This work deals with fault tolerance in distributed MANET (Mobile Ad hoc Network) systems. A major issue for a failure detection protocol is that it may confuse a fault with a voluntary or involuntary disconnection of a node, and therefore suspect correct nodes of failing, and conversely. Within this context, we propose in this paper a failure detection protocol that copes with the constraints of MANET systems. The aim of this work is to allow the system to launch a recovery process. To this effect, our protocol, called FDAN, is based on the class of heartbeat protocols. It takes into account no preliminary knowledge of the network, node disconnections and reconnections, and resource limitations... Hence, we show that by using temporary lists and different timeout levels, we achieve a notable reduction in the number of false suspicions.
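A heartbeat detector with graded timeouts, of the general kind FDAN builds on, can be sketched compactly: a node that falls silent is first placed on a temporary 'suspected' list (it may merely have moved away or disconnected voluntarily) and is declared failed only after a longer silence. The two timeout values below are assumptions for illustration.

    import time

    class HeartbeatDetector:
        # Sketch of a heartbeat failure detector with two timeout levels, in
        # the spirit of FDAN's temporary lists: a silent node is first only
        # 'suspected' and declared failed only after a longer timeout.
        SUSPECT_AFTER, FAIL_AFTER = 2.0, 6.0   # seconds; assumed values

        def __init__(self):
            self.last_seen = {}

        def heartbeat(self, node):
            self.last_seen[node] = time.monotonic()   # also clears any suspicion

        def status(self, node):
            silence = time.monotonic() - self.last_seen.get(node, 0.0)
            if silence < self.SUSPECT_AFTER:
                return "alive"
            return "suspected" if silence < self.FAIL_AFTER else "failed"

    d = HeartbeatDetector()
    d.heartbeat("n1")
    print(d.status("n1"))   # alive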
2. Capacity Scaling of Ad Hoc Networks with Spatial Diversity
Hunter, Andrew M; Weber, Steven P 2007-01-01 This paper derives the exact outage probability and transmission capacity of ad hoc wireless networks with nodes employing multiple-antenna diversity techniques. The analysis enables a direct comparison of the number of simultaneous transmissions achieving a certain data rate under different diversity techniques. Preliminary results derive the outage probability and transmission capacity for a general class of signal distributions, which facilitates quantifying the gain for fading or non-fading environments. The transmission capacity is then given for uniformly random networks with path loss exponent $\alpha > 2$ in which nodes: (1) use static beamforming through $M$ sectorized antennas, for which the gain is shown to be $\Theta(M^2)$ if the antennas are without sidelobes, but less in the event of a nonzero sidelobe level; (2) use dynamic eigen-beamforming (maximal ratio transmission/combining), for which the gain is shown to be $\Theta(M^{2/\alpha})$; (3) use various transmit antenna selection and receive antenna ...

3. A Survey: variants of TCP in Ad-hoc networks
Komal Zaman 2013-11-01 Full Text Available A MANET (mobile ad-hoc network) forms a temporary network of wireless mobile nodes without any infrastructure, where all nodes are allowed to move freely, configure themselves and interconnect with their neighbors to perform peer-to-peer communication and transmission. TCP (Transmission Control Protocol) offers reliable, connection-oriented, end-to-end delivery. This article provides a review and comparison of the existing variants of TCP, for instance TCP Tahoe, TCP Reno, TCP New Reno, TCP Lite, TCP SACK, TCP Vegas, TCP Westwood and TCP FACK. TCP's performance depends on the variant used, owing to differences in congestion control and in the activation of procedures such as slow start, fast retransmission, congestion avoidance, retransmission, fast recovery and selective acknowledgement. This analysis is essential for understanding which TCP implementation suits a specific scenario and for nominating a suitable one.
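The mechanisms these variants mix and match act on the congestion window. As a didactic sketch (not a model of any specific variant's full behaviour), the following traces an idealised Reno-style window over a few round-trip times, with slow start, congestion avoidance, and a halving on loss; the loss position and initial threshold are arbitrary.

    def cwnd_trace(rounds=12, ssthresh=8, loss_at=(6,)):
        # Idealised congestion-window evolution (in segments): slow start
        # doubles cwnd each RTT up to ssthresh, congestion avoidance then adds
        # one segment per RTT, and a loss halves ssthresh and resumes from it
        # (Reno-style fast recovery). A didactic sketch, not a full TCP model.
        cwnd, trace = 1, []
        for rtt in range(rounds):
            trace.append(cwnd)
            if rtt in loss_at:                       # triple-dup-ACK style loss
                ssthresh = max(2, cwnd // 2)
                cwnd = ssthresh
            elif cwnd < ssthresh:
                cwnd *= 2                            # slow start
            else:
                cwnd += 1                            # congestion avoidance
        return trace

    print(cwnd_trace())   # [1, 2, 4, 8, 9, 10, 11, 5, 6, 7, 8, 9]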
4. M-BOARD IN AN AD-HOC NETWORK ENVIRONMENT
Sharon Panth 2013-04-01 Full Text Available A notice board is an essential part of any organization. This paper presents the design and implementation of M-Board (Mobile Notice Board) for an ad-hoc network environment that can be established and made available for an educational or industrial setting. Cost-free communication between the mobile phone clients and the server takes place with the help of Bluetooth wireless technology. M-Board is developed particularly as an informative application environment providing basic information, such as daily events or timetables, to its users. The design is based on the amalgamation of Java ME with other technologies like Java SE, Java EE, PHP and MySQL. The system is designed to provide a simple, easy-to-use, cost-free solution in a ubiquitous environment. The system design is easily implemented and extensible, allowing a number of clients in a Personal Area Network (PAN) to exchange information with the hotspot server.

5. Collaboration Layer for Robots in Mobile Ad-hoc Networks
Borch, Ole; Madsen, Per Printz; Broberg, Jacob Honoré 2009-01-01 In many applications, multiple robots in mobile ad-hoc networks are required to collaborate in order to solve a task. This paper shows by proof of concept that a Collaboration Layer can be modelled and designed to handle the collaborative communication, which enables robots in small to medium size... networks to solve tasks collaboratively. In this proposal the Collaboration Layer is modelled to handle service and position discovery, group management, and synchronisation among robots, but the layer is also designed to be extendable. Based on this model of the Collaboration Layer, generic services... A prototype of the Collaboration Layer has been developed to run in a simulated environment and tested in an evaluation scenario. In the scenario, five robots solve the tasks of vacuum cleaning and entrance guarding, which involves the ability to discover potential co-workers, form groups, shift from one group...

6. Parameterized Verification of Safety Properties in Ad Hoc Network Protocols
Giorgio Delzanno 2011-08-01 Full Text Available We summarize the main results proved in recent work on the parameterized verification of safety properties for ad hoc network protocols. We consider a model in which the communication topology of a network is represented as a graph. Nodes represent states of individual processes; adjacent nodes represent single-hop neighbors. Processes are finite state automata that communicate via selective broadcast messages, and reception of a broadcast is restricted to single-hop neighbors. For this model we consider a decision problem that can be expressed as verifying the existence of an initial topology in which the execution of the protocol can lead to a configuration with at least one node in a certain state. The decision problem is parametric both in the size and in the form of the communication topology of the initial configurations. We draw a complete picture of the decidability and complexity boundaries of this problem under various assumptions on the possible topologies.

7. Malware-Propagative Mobile Ad Hoc Networks: Asymptotic Behavior Analysis
Vasileios Karyotis; Anastasios Kakalis; Symeon Papavassiliou 2008-01-01 In this paper, the spreading of malicious software over ad hoc networks, where legitimate nodes are prone to propagate the infections they receive from either an attacker or their already-infected neighbors, is analyzed. Considering the Susceptible-Infected-Susceptible (SIS) node infection paradigm, we propose a probabilistic model, on the basis of the theory of closed queuing networks, that aims at describing the aggregated behavior of the system when attacked by malicious nodes. Because of its nature, the model is also able to deal more effectively with the stochastic behavior of attackers and the inherent probabilistic nature of the wireless environment. The proposed model is able to describe accurately the asymptotic behavior of malware-propagative, large-scale ad hoc networking environments. Using the Norton equivalent of the closed queuing network, we obtain analytical results for its steady-state behavior, which in turn are used to identify the critical parameters affecting the operation of the network. Finally, through modeling and simulation, some additional numerical results are obtained with respect to the behavior of the system when multiple attackers are present, and regarding the time-dependent evolution and impact of an attack.
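The SIS paradigm used in this model is easy to state operationally: in each round, an infected node infects each susceptible neighbour with some probability and itself recovers back to the susceptible state with another. The sketch below runs these dynamics on a toy ring topology; the rates and topology are illustrative assumptions, and the paper's queuing-network analysis is not reproduced here.

    import random

    def sis_step(state, neighbours, beta=0.15, delta=0.1, rng=random):
        # One round of Susceptible-Infected-Susceptible dynamics on a graph:
        # an infected node infects each susceptible neighbour w.p. beta and
        # recovers (back to susceptible) w.p. delta. Rates are illustrative.
        nxt = dict(state)
        for v, infected in state.items():
            if infected:
                if rng.random() < delta:
                    nxt[v] = False
                for u in neighbours[v]:
                    if not state[u] and rng.random() < beta:
                        nxt[u] = True
        return nxt

    # tiny ring topology with one initially infected node
    nbrs = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    state = {i: i == 0 for i in range(6)}
    for _ in range(20):
        state = sis_step(state, nbrs)
    print("infected after 20 rounds:", sum(state.values()))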
8. Experimental Evaluation of the Usage of Ad Hoc Networks as Stubs for Multiservice Networks
Miguel Almeida 2007-03-01 Full Text Available This paper describes an experimental evaluation of a multiservice ad hoc network, intended to be interconnected with an infrastructure, operator-managed network. This network supports the efficient delivery of services, unicast and multicast, legacy and multimedia, to users connected in the ad hoc network. It contains the following functionalities: routing and delivery of unicast and multicast services; distributed QoS mechanisms to support service differentiation and resource control responsive to node mobility; and security, charging, and rewarding mechanisms to ensure the correct behaviour of the users in the ad hoc network. This paper experimentally evaluates the performance of the multiple mechanisms, and the influence and performance penalty introduced in the network with the incremental inclusion of new functionalities. The performance results obtained in the different real scenarios may call into question the real usage of ad-hoc networks for more than a minimal number of hops with such a large number of functionalities deployed.

10. A Smart Booster Approach In Wireless Ad Hoc Network
2016-02-01 Full Text Available The wireless mobile ad-hoc network is an upcoming next-generation technology. The foremost reason for the popularity of MANETs is their infrastructure-less nature. A MANET is a group of wireless mobile nodes connected wirelessly, and nodes may be highly mobile, because the beauty of a wireless network (like a MANET or a cellular system) lies in mobility. Due to this mobility, however, the topology of the network changes frequently, and these frequent topology changes affect communication between nodes. If nodes are within range of each other they can communicate properly, but if they are not, smooth communication is not possible and an ongoing communication may be disrupted or lost. There is therefore a need to design a mechanism that can handle such situations and prevent communication failure or frequent link failure. In the present work, a novel booster mechanism is proposed to overcome such link failures. In the proposed approach, the power level at both the transmitter and the receiver is measured in order to maintain smooth communication between the nodes. If one node is moving away from the communicating node, the moving node measures its received power with respect to the distance, and if its current power level reaches the threshold level it switches on its booster; at the same time, it sends a message containing its received power level to the source node, which then also switches on its booster. Both nodes thus stay connected and protect the link from failure during mobility. The booster approach is a novel concept in the direction of smooth communication in the dynamic wireless environment of mobile ad hoc networks.
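The trigger condition in such a scheme amounts to comparing an estimated received power against a threshold as the distance grows. The sketch below uses the standard free-space path-loss formula to show where a hypothetical booster would switch on; the transmit power, frequency and threshold are assumptions, not values from the paper.

    import math

    # Free-space path-loss received power (dBm); constants are assumptions
    # used only to make the threshold trigger concrete.
    def rx_power_dbm(tx_dbm, dist_m, freq_hz=2.4e9):
        fspl_db = 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55
        return tx_dbm - fspl_db

    TX_DBM, THRESHOLD_DBM = 20.0, -75.0   # assumed booster trigger level

    for d in (50, 100, 200, 400, 800):
        p = rx_power_dbm(TX_DBM, d)
        action = "booster ON (notify peer)" if p <= THRESHOLD_DBM else "normal"
        print(f"{d:4d} m: {p:6.1f} dBm -> {action}")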
2008-05-01 There has been a rich interplay in recent years between (i) empirical investigations of real-world dynamic networks, (ii) analytical modeling of the microscopic mechanisms that drive the emergence of such networks, and (iii) harnessing of these mechanisms to either manipulate existing networks or engineer new networks for specific tasks. We continue in this vein, and study the deletion phenomenon in the web by following two different sets of websites (each comprising more than 150,000 pages) over a one-year period. Empirical data show that there is a significant deletion component in the underlying web networks, but the deletion process is not uniform. This motivates us to introduce a new mechanism of preferential survival (PS), where nodes are removed according to a degree-dependent deletion kernel, D(k) ∝ k^(-α), with α ≥ 0. We use the mean-field rate equation approach to study a general dynamic model driven by Preferential Attachment (PA), Double PA (DPA), and a tunable PS (i.e., with any α > 0), where c nodes (c < 1) are deleted per added node. The dynamics reported in this work can be used to design and engineer stable ad hoc networks and explain the stability of the power-law exponents observed in real-world networks; a sketch of such a deletion kernel follows below. 12. TCP Issues in Mobile Ad Hoc Networks: Challenges and Solutions Wei-Qiang Xu; Tie-Jun Wu 2006-01-01 Mobile ad hoc networks (MANETs) are very complex distributed communication systems with wireless mobile nodes that can be freely and dynamically self-organized into arbitrary and temporary network topologies. MANETs inherit several limitations of wireless networks, while facing new challenges arising from their specificity, such as route failures, hidden terminals and exposed terminals. When TCP is applied in a MANET environment, a number of tough problems have to be dealt with. In this paper, a comprehensive survey of this dynamic field is given. Specifically, for the first time all factors impairing TCP performance are identified based on the network protocol hierarchy, i.e., the lossy wireless channel at the physical layer; excessive contention and unfair access at the MAC layer; fragile routing at the network layer; node mobility, which affects both the MAC and network layers; and an unfit congestion window size, together with path asymmetry, at the transport layer. How these factors degrade TCP performance is clearly explained. Then, based on how to alleviate the impact of each of the factors listed above, the existing solutions are collected as comprehensively as possible and classified into a number of categories, and their advantages and limitations are discussed. Based on the limitations of these solutions, a set of open problems for designing more robust solutions is suggested. 13. Flying Ad-Hoc Networks: Routing Protocols, Mobility Models, Issues Muneer Bani Yassein 2016-06-01 Full Text Available Flying Ad-Hoc Networks (FANETs) are groups of Unmanned Air Vehicles (UAVs) which complete their work without human intervention. There are two main problems in this kind of network. The first is communication between UAVs; various routing protocols, classified into three categories (static, proactive, and reactive), have been introduced to solve it. The second is the network design, which depends on network mobility, that is, on the process of cooperation and collaboration between the UAVs. A FANET mobility model is introduced in order to solve this problem.
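A minimal sketch of sampling deletions under a degree-dependent kernel, as in the preferential-survival entry above. The scraped text lost the exponent, so the kernel form D(k) ∝ k**(-alpha) (hubs preferentially survive when alpha > 0) is my reading of the abstract, and the helper name is hypothetical:

```python
import random

def pick_node_for_deletion(degrees, alpha):
    """Sample one node with probability proportional to k**(-alpha).
    With alpha > 0, low-degree nodes are deleted first, so high-degree
    hubs preferentially survive; alpha = 0 reduces to uniform deletion."""
    nodes = list(degrees)
    weights = [k ** (-alpha) if k > 0 else 1.0 for k in degrees.values()]
    return random.choices(nodes, weights=weights, k=1)[0]

degrees = {"a": 1, "b": 2, "c": 10}     # toy degree table
print(pick_node_for_deletion(degrees, alpha=1.5))   # usually "a" or "b"
```

Coupling such a deletion step with a preferential-attachment growth step gives a toy version of the PA + PS dynamics the paper analyzes with mean-field rate equations.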
A mobility model defines the paths, speed variations, and positions of the UAVs. As of today, the Random Waypoint model is used as a synthetic mobility model in the majority of simulation scenarios. The Random Waypoint model is not appropriate for UAVs, however, because UAVs do not alter their course, mobility, and speed abruptly; for this reason, we consider more realistic models, such as the Semi-Random Circular Movement (SRCM) mobility model. We also consider other mobility models: the Mission Plan-Based (MPB) mobility model, the Pheromone-Based model, and the Paparazzi mobility model (PPRZM). This paper presented and discussed the main routing protocols and main mobility models used to solve the communication, cooperation, and collaboration problems in FANET networks. 14. Securing Mobile Ad hoc Networks: Key Management and Routing Chauhan, Kamal Kumar; 10.5121/ijans.2012.2207 2012-01-01 Secure communication between two nodes in a network depends on a reliable key management system that generates and distributes keys between communicating nodes and a secure routing protocol that establishes a route between them. Due to the lack of a central server and infrastructure in Mobile Ad hoc Networks (MANETs), managing keys in the network is a major problem. Dynamic changes in the network's topology cause weak trust relationships among the nodes in the network. In MANETs a mobile node operates not only as an end terminal but also as an intermediate router. Therefore, a multi-hop scenario occurs for communication in MANETs, where there may be one or more malicious nodes between source and destination. A routing protocol is said to be secure if it detects the detrimental effects of malicious node(s) in the path from source to destination. In this paper, we propose a key management scheme and a secure routing protocol that secures on-demand routing protocols such as DSR and AODV. We assume that MANETs ... 15. Random Time Identity Based Firewall In Mobile Ad hoc Networks Suman, Patel, R. B.; Singh, Parvinder 2010-11-01 A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component in MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as firewall after a fixed time, based on the critical value of certain parameters like power backup; a sketch follows below. It effectively balances power and resource utilization of the entire MANET, because the responsibility of implementing the firewall is equally shared among all the nodes. At the same time it ensures improved security for MANETs from outside attacks, as an intruder will not be able to find the entry point into the MANET due to the random selection of nodes for firewall implementation. 16. An Efficient Channel Access Scheme for Vehicular Ad Hoc Networks 2017-01-01 Full Text Available Vehicular Ad Hoc Networks (VANETs) are gaining popularity due to the potential of Intelligent Transport Systems (ITS) technology.
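The random firewall rotation in entry 15 above amounts to periodically re-electing a firewall node at random among nodes whose power backup exceeds a critical value. A minimal sketch, with the threshold value and field names as illustrative assumptions, not taken from the paper:

```python
import random

def elect_firewall(nodes, min_power=0.3):
    """Pick the next firewall node uniformly at random among nodes
    whose remaining power backup is above a critical threshold; the
    caller re-runs this after each fixed time period."""
    eligible = [n for n, power in nodes.items() if power >= min_power]
    if not eligible:
        raise RuntimeError("no node has enough power backup")
    return random.choice(eligible)

nodes = {"n1": 0.9, "n2": 0.2, "n3": 0.6}   # node -> battery fraction
print(elect_firewall(nodes))                # "n1" or "n3", never "n2"
```

The randomness is the point of the design: because the firewall's identity changes unpredictably, an outside intruder cannot learn a fixed entry point into the MANET.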
It provides many efficient network services such as safety warnings (collision warning), entertainment (video and voice), map-based guidance, and emergency information. VANETs most commonly use Road Side Units (RSUs), referred to as Vehicle-to-Infrastructure (V2I) mode, and Vehicle-to-Vehicle (V2V) mode for data access. The IEEE 802.11p standard, which was originally designed for Wireless Local Area Networks (WLANs), is modified to address this type of communication. However, IEEE 802.11p uses the Distributed Coordination Function (DCF) for communication between wireless nodes. Therefore, it does not perform well for high-mobility networks such as VANETs. Moreover, in RSU mode, timely provision of data and services under a high density of vehicles is challenging. In this paper, we propose an RSU-based efficient channel access scheme for VANETs under high traffic and mobility. In the proposed scheme, the contention window is dynamically varied according to the times (deadlines) at which the vehicles are going to leave the RSU range. The vehicles with shorter time deadlines are served first and vice versa; a sketch of this contention-window adaptation follows after the next two entries. Simulation is performed using the Network Simulator (NS-3, v3.6). The simulation results show that the proposed scheme performs better in terms of throughput, backoff rate, RSU response time, and fairness. 17. Victor Gau 2010-01-01 Full Text Available We propose an idle probability-based broadcasting method, iPro, which employs an adaptive probabilistic mechanism to improve the performance of data broadcasting over dense wireless ad hoc networks. In multisource one-hop broadcast scenarios, the modeling and simulation results of the proposed iPro are shown to significantly outperform the standard IEEE 802.11 under saturated conditions. Moreover, the results also show that without estimating the number of competing nodes and changing the contention window size, the performance of the proposed iPro can still approach the theoretical bound. We further apply iPro to multihop broadcasting scenarios, and the experiment results show that within the same elapsed time after the broadcasting, the proposed iPro has significantly higher Packet-Delivery Ratios (PDR) than traditional methods. 18. Realistic Mobility Modeling for Vehicular Ad Hoc Networks Akay, Hilal; Tugcu, Tuna 2009-08-01 Simulations used for evaluating the performance of routing protocols for Vehicular Ad Hoc Networks (VANETs) are mostly based on random mobility and fail to consider individual behaviors of the vehicles. Unrealistic assumptions about mobility produce misleading results about the behavior of routing protocols in real deployments. In this paper, a realistic mobility modeling tool, Mobility for Vehicles (MOVE), which considers the basic mobility behaviors of vehicles, is proposed for a more accurate evaluation. The proposed model is tested against the Random Waypoint (RWP) model using the AODV and OLSR protocols. The results show that the mobility model significantly affects the number of nodes within the transmission range of a node, the volume of control traffic, and the number of collisions. It is shown that the number of intersections, grid size, and node density are important parameters when dealing with VANET performance.
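Returning to the RSU-based channel access scheme in entry 16 above: its core idea is a contention window that shrinks as a vehicle's residual time in RSU range shrinks, so vehicles about to leave are served first. A minimal sketch; the paper only states that the window varies with the deadline, so the linear scaling law and all constants here are assumptions for illustration:

```python
def contention_window(time_to_leave, max_deadline=30.0,
                      cw_min=15, cw_max=1023):
    """Map a vehicle's residual time in RSU range (seconds) to a
    contention window: a short deadline yields a small window and
    therefore earlier channel access."""
    frac = max(0.0, min(1.0, time_to_leave / max_deadline))
    return int(cw_min + frac * (cw_max - cw_min))

for t in (2.0, 10.0, 30.0):
    print(f"deadline {t:5.1f} s -> CW = {contention_window(t)}")
```

A vehicle two seconds from leaving draws its backoff from a far smaller window than one with thirty seconds left, which is exactly the "shorter deadlines served first" behavior the abstract describes.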
19. Parallel routing in Mobile Ad-Hoc Networks Day, Khaled; Arafeh, Bassel; Alzeidi, Nasser; 10.5121/ijcnc.2011.3506 2011-01-01 This paper proposes and evaluates a new position-based Parallel Routing Protocol (PRP) for simultaneously routing multiple data packets over disjoint paths in a mobile ad-hoc network (MANET) for higher reliability and reduced communication delays. PRP views the geographical region where the MANET is located as a virtual 2-dimensional grid of cells. Cell-disjoint (parallel) paths between grid cells are constructed and used for building pre-computed routing tables. A single gateway node in each grid cell handles routing through that grid cell, reducing routing overheads; a sketch of the position-to-cell mapping follows below. Each node maintains updated information about its own location in the virtual grid using GPS. Nodes also keep track of the location of other nodes using a newly proposed cell-based broadcasting algorithm. Nodes exchange energy-level information with neighbors, allowing energy-aware selection of the gateway nodes. Performance evaluation results have been derived showing the attractiveness of the proposed parallel routing protocol from different resp... 20. Opportunistic Channel Scheduling for Ad Hoc Networks with Queue Stability Dong, Lei; Wang, Yongchao 2015-03-01 In this paper, a distributed opportunistic channel access strategy for ad hoc networks is proposed. We consider multiple sources contending for the transmission opportunity; the winning source decides to transmit or to restart contention based on the current channel condition. Because real data traffic is assumed at all links, the decision must also take the stability of the queues into account. We formulate opportunistic channel scheduling as a constrained optimization problem which maximizes the system average throughput under the constraint that the queues of all links are stable. The proposed optimization model is solved using Lyapunov stability from queueing theory. The successive channel access problem is decoupled into a single optimal stopping problem at every frame and solved with a Lyapunov algorithm. The threshold for every frame is different, and it is derived from the instantaneous queue information. Finally, computer simulations are conducted to demonstrate the validity of the proposed strategy. Ray, Indrajit In this work, we report on one aspect of an autonomous robot-based digital evidence acquisition system that we are developing. When forensic investigators operate within a hostile environment they may use remotely operated unmanned devices to gather digital evidence. These systems periodically upload the evidence to a remote central server using a mobile ad hoc network. In such cases, large pieces of information need to be fragmented and transmitted in an appropriate manner. To support proper forensic analysis, certain properties must be ensured for each fragment of evidence: confidentiality during communication, authenticity and integrity of the data, and, most importantly, strong evidence of membership for fragments. This paper describes a framework to provide these properties for the robot-based evidence acquisition system under development. 2. SECURITY CHALLENGES IN MOBILE AD HOC NETWORKS: A SURVEY Ali Dorri 2015-02-01 Full Text Available A MANET is a kind of ad hoc network with mobile, wireless nodes. Because of its special characteristics, such as dynamic topology, hop-by-hop communications, and quick and easy setup, a MANET faces many challenges, notably routing, security, and clustering.
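The parallel routing protocol (entry 19 above) views the deployment region as a virtual 2-D grid of cells, each with one gateway node. A minimal sketch of the position-to-cell mapping such a scheme needs; the cell size and function names are illustrative assumptions, not the published protocol:

```python
def cell_of(x, y, cell_size=250.0):
    """Map a GPS-derived (x, y) position in metres to virtual grid
    cell coordinates; each cell elects one gateway node for routing."""
    return (int(x // cell_size), int(y // cell_size))

def same_cell(a, b, cell_size=250.0):
    """True if two positions fall in the same cell (same gateway)."""
    return cell_of(*a, cell_size) == cell_of(*b, cell_size)

print(cell_of(612.0, 130.0))                      # -> (2, 0)
print(same_cell((612.0, 130.0), (700.0, 40.0)))   # True: one gateway serves both
```

Because every node can compute its cell locally from GPS, routing tables only need cell-to-cell (gateway-to-gateway) entries, which is what keeps the per-node overhead low.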
The security challenges arise due to MANETs' self-configuration and self-maintenance capabilities. In this paper, we present an elaborate view of issues in MANET security. Based on MANETs' special characteristics, we define three security parameters for MANETs. In addition we divide MANET security into two different aspects and discuss each one in detail. A comprehensive analysis of the security aspects of MANETs and of defeating approaches is presented. In addition, defeating approaches against attacks are evaluated on several important metrics. After these analyses and evaluations, future scopes of work are presented. 3. Precise positioning systems for Vehicular Ad-Hoc Networks Mohamed, Samir A Elsagheer; Ansari, Gufran Ahmad 2012-01-01 Vehicular Ad Hoc Networks (VANETs) are a very promising research venue that can offer many useful and critical applications, including safety applications. Most of these applications require that each vehicle knows its current position precisely and in real time. GPS is the most common positioning technique for VANETs. However, it is not accurate. Moreover, GPS signals cannot be received in tunnels, undergrounds, or near tall buildings, so no positioning service can be obtained in these locations. Even though Differential GPS (DGPS) can provide high accuracy, there is still no GPS coverage in these locations. In this paper, we provide positioning techniques for VANETs that can deliver an accurate positioning service in areas where GPS signals are hindered by obstacles. Experimental results show significant improvement in the accuracy. Combined with DGPS, this allows the continuity of a precise positioning service that can be used by most VANET applications. 4. Enhancing the performance of ad hoc wireless networks with smart antennas 2006-01-01 A large portion of the network capacity of an ad hoc network can be wasted by the medium access mechanisms of omni-directional antennas. To overcome this problem, researchers propose the use of directional or adaptive antennas that largely reduce radio interference, improving the utilization of the wireless medium and the resulting network throughput. Enhancing the Performance of Ad Hoc Wireless Networks with Smart Antennas discusses these issues and challenges. Following an introduction to ad hoc networks, it presents an overview of basic Media Access Control (MAC) and routing protocols in ad hoc 5. Cluster-based Intrusion Detection in Wireless Ad-Hoc Networks Di Wu; Zhisheng Liu; Yongxin Feng; Guangxing Wang 2004-01-01 There are inherent vulnerabilities that are not easily preventable in mobile ad-hoc networks. To build a highly secure wireless ad-hoc network, intrusion detection and response techniques need to be deployed. Intrusion detection in cluster-based ad-hoc networks is introduced; then an architecture for better intrusion detection based on clusters, using data mining, in wireless ad-hoc networks is shown. A statistical anomaly detection approach has been used. The anomaly detection and trace analysis are done locally in each node, possibly through cooperation with clusterhead detection in the network. 6. Reliable Mobile Ad-Hoc Network Routing Using Firefly Algorithm D Jinil Persis 2016-05-01 Full Text Available Routing in a Mobile Ad-hoc NETwork (MANET) is a contemporary graph problem that is solved using various shortest-path search techniques.
The routing algorithms employed in modern routers use deterministic algorithms that extract an exact non-dominated set of solutions from the search space. The search efficiency of these algorithms is found to have an exponential time complexity in the worst case. Moreover, for a MANET this is by nature a multi-objective optimization problem, and the changing topology layout must be considered. This study employs a formulation incorporating the objectives of delay, hop-distance, load, cost, and reliability, which have a significant impact on network performance. Simulation with different random topologies has been carried out to illustrate the implementation of an exhaustive search algorithm, and it is observed that the algorithm can only handle small-scale networks limited to 15 nodes. A random-search meta-heuristic that adopts the behavior of firefly swarms has been proposed for larger networks, yielding an approximate non-dominated path set. The Firefly Algorithm is found to perform better than the exact algorithm in terms of scalability and computational time; a sketch of the underlying multi-objective path scoring follows below. 7. Enhancing Node Cooperation in Mobile Ad Hoc Network S. Kami Makki 2013-03-01 Full Text Available Mobile Ad Hoc Networks (MANETs) have been a research interest over the past few years; yet node cooperation has continually been a recognized issue for researchers. Because of their lack of infrastructure, MANETs depend on the cooperation of intermediate nodes in order to forward or send packets of their own to other nodes in the network. Therefore, nodes located in the central area of the network are used more frequently than the nodes located on the outer boundary. The inner nodes have to forward the packets of other nodes, and if there is no payoff for forwarding the packets, the nodes may start to refrain from forwarding the packets of others to save their energy. The Community Enforcement Mechanism has been proposed to enforce cooperation among the nodes and reduce their misbehavior. Although it provides cooperation among the nodes, it does not essentially increase the network lifetime. In this paper, we present an efficient algorithm to improve the longevity of a MANET based upon more structured node cooperation. 8. Design of the next generation cognitive mobile ad hoc networks Amjad, Ali; Wang, Huiqiang; Chen, Xiaoming Cognition capability has been seen by researchers as the way forward for the design of the next generation of Mobile Ad Hoc Networks (MANETs). The reason why a cognitive paradigm would be more suited to a MANET is that MANETs are highly dynamic networks. The topology may change very frequently during the operation of a MANET. Traffic patterns in MANETs can vary from time to time depending on the needs of the users. The size of a MANET and its node density are also very dynamic and may change without any predictable pattern. In a MANET environment, most of these parameters may change very rapidly, and keeping track of them manually would be very difficult. Previous studies have shown that the performance of a given routing approach in MANETs depends on the size of the network and the node density. The choice of whether to use a reactive or proactive routing approach comes down to the network size parameter. Static or offline approaches to fine-tune a MANET to achieve certain performance goals are hence not very productive, as many of these parameters keep changing during the course of operation of MANETs.
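A minimal sketch of the weighted multi-objective path evaluation that the firefly-based routing study above searches over. The scalarization, the weights, and the brightness mapping are illustrative assumptions; the paper builds a non-dominated path set rather than optimizing a single weighted sum, and in practice the objectives would need normalization to comparable scales:

```python
def path_cost(path_metrics, weights=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """Scalarize the five objectives (delay, hop distance, load, cost,
    1 - reliability) into one cost; lower is better. Objectives are
    assumed pre-normalized to comparable scales."""
    delay, hops, load, cost, reliability = path_metrics
    objectives = (delay, hops, load, cost, 1.0 - reliability)
    return sum(w * o for w, o in zip(weights, objectives))

def brightness(path_metrics):
    """Firefly 'brightness': inversely related to path cost, so the
    swarm is attracted toward cheaper candidate paths."""
    return 1.0 / (1.0 + path_cost(path_metrics))

a = (0.12, 4, 0.5, 2.0, 0.95)   # (delay, hops, load, cost, reliability)
b = (0.30, 2, 0.2, 1.0, 0.80)
print("prefer a" if brightness(a) > brightness(b) else "prefer b")
```

In the firefly heuristic, each candidate path is a "firefly" that moves through the search space toward brighter (lower-cost) neighbors, which is what lets it scale past the 15-node limit of the exhaustive search.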
Similarly, continuing the cognitive MANET design above, performance would improve greatly if the MAC layer entity could operate in a more flexible manner. In this paper we propose a cognitive MANET design that ensures that all these dynamic parameters are automatically monitored and decisions are based on their current status. 9. Data management issues in mobile ad hoc networks HARA, Takahiro 2017-01-01 Research on mobile ad hoc networks (MANETs) has been a hot research topic since the mid-1990s. Over the first decade, most research focused on networking techniques, ignoring data management issues. We, however, realized early the importance of data management in MANETs, and have been conducting studies in this area for 15 years. In this review, we summarize some key technical issues related to data management in MANETs, and the studies we have done in addressing these issues, which include placement of data replicas, update management, and query processing with security management. The techniques proposed in our studies have been designed with deep consideration of MANET features including network partitioning, node participation/disappearance, limited network bandwidth, and energy efficiency. Our studies published in the early 2000s developed data management in MANETs as a new research field, and our recent studies are expected to provide significant guidelines for new research directions. We conclude the review by discussing some future directions for research. PMID:28496052 李俊; 李鑫; 黄红伟 2011-01-01 11. Extending Service Area of IEEE 802.11 Ad Hoc Networks Choi, Woo-Yong 2012-06-01 According to the current IEEE 802.11 wireless LAN standards, IEEE 802.11 ad hoc networks have the limitation that all STAs (Stations) must be within one-hop transmission range of each other. In this paper, to alleviate this limitation of IEEE 802.11 ad hoc networks, we propose an efficient method for selecting the most appropriate pseudo AP (Access Point) from among the set of ad hoc STAs and extending the service area of IEEE 802.11 ad hoc networks by having the pseudo AP relay the internal traffic of the network. Numerical examples show that the proposed method significantly extends the service area of IEEE 802.11 ad hoc networks. 2016-05-01 13. QUEUEING MODELS IN MOBILE AD-HOC NETWORKS 2014-01-01 Full Text Available Mobile Ad-hoc Networks (MANETs) have gained an essential share of researchers' attention and have become very popular in the last few years. MANETs can operate without fixed communications infrastructure and can survive rapid changes in the network topology. They can be studied formally as graphs in which the set of edges varies in time. One of the main methods for determining the performance of MANETs is simulation. This study proposes the Enhanced Probabilistic Ad hoc on Demand Distance Vector (EPAODV) routing protocol, which solves the broadcast storm problem of Ad hoc on Demand Distance Vector (AODV). Our evaluation of MANETs is based on throughput, end-to-end delay, and packet delivery ratio. We evaluated the end-to-end delay as it is one of the most important evaluation metrics in computer networks. In our proposed algorithm, using an M/M/C: ∞/FIFO queueing model, we show that better results are obtained with the EPAODV protocol: increased throughput and data delivery ratio, and decreased end-to-end delay compared to the existing protocols. 14.
Utilization of AODV in Wireless Ad Hoc Networks 2007-01-01 Full Text Available AODV is a mature and widely accepted routing protocol for Mobile Ad hoc Networks (MANETs); it has low processing and memory overhead and low network utilization, and works well even in high-mobility situations. We modified AODV to use connected dominating sets, resulting in the AODV-DS protocol. Our contribution is in addressing the fragility of a minimum connected dominating set in the presence of mobility and cross-traffic. We develop three heuristics to fortify the dominating-set process against loss by re-introducing some redundancy, using a least-first set cover rather than a greedy set cover. AODV-DS exhibits about a 70% savings in RREQ traffic while maintaining the same or better latency and delivery ratio for 30 source nodes in a graph of 50 nodes. It was also about as fair as conventional AODV in distributing the RREQ burden among all nodes, except in cases of low mobility and few source nodes. For low-mobility networks, it was not as fair to forwarding nodes as AODV, but better than AODV with Dominant Pruning (DP). 15. Metric-Based Cooperative Routing in Multihop Ad Hoc Networks Xin He 2012-01-01 Full Text Available Cooperative communication fully leverages the broadcast nature of wireless channels and exploits time/spatial diversity in a distributed manner, thereby achieving significant improvements in system capacity and transmission reliability. Cooperative diversity has been well studied from the physical layer perspective. Thereafter, cooperative MAC design has also drawn much attention recently. However, very little work has addressed cooperation at the routing layer. In this paper, we propose a simple yet efficient scheme for cooperative routing by using cooperative metrics including packet delivery ratio, throughput, and energy consumption efficiency. To make a routing decision based on our scheme, a node needs to first determine whether cooperation on each link is necessary or not, and if necessary, select the optimal cooperative scheme as well as the optimal relay. To do so, we calculate and compare cooperative routing metric values for each potential relay for each different cooperative MAC scheme (C-ARQ and CoopMAC in this study), and further choose the best value and compare it with the noncooperative link metric. Using the final optimal metric value instead of the traditional metric value at the routing layer, new optimal paths are set up in multihop ad hoc networks, taking into account the cooperative benefits from the MAC layer. The network performance of the cooperative routing solution is demonstrated using a simple network topology. Cynthia Jayapal 2010-09-01 Full Text Available Providing efficient and scalable service provisioning in a Mobile Ad Hoc Network (MANET) is a big research challenge. In adaptive service provisioning mechanisms, an adaptive election procedure is used to select a coordinator node. The role of a service coordinator is crucial in any distributed directory-based service provisioning scheme. The existing coordinator election schemes use either the node ID or a hash function to choose the coordinator, and in these schemes leader changes are frequent due to node mobility. We propose an adaptive scheme that makes use of an eligibility factor, calculated from the distance to the zone center, the remaining battery power, and the average speed, to elect a core node that changes according to the network dynamics; a sketch follows below.
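A minimal sketch of the eligibility factor used in the adaptive coordinator election above, combining distance to the zone center, remaining battery power, and average speed. The weights and normalization constants are illustrative assumptions; the abstract names the three inputs but not the exact formula:

```python
def eligibility(dist_to_center, battery, avg_speed,
                max_dist=500.0, max_speed=20.0, w=(0.4, 0.4, 0.2)):
    """Higher is better: a node close to the zone center, with high
    remaining battery and low average speed, makes a stable core."""
    closeness = 1.0 - min(dist_to_center / max_dist, 1.0)
    stability = 1.0 - min(avg_speed / max_speed, 1.0)
    return w[0] * closeness + w[1] * battery + w[2] * stability

# node -> (distance to zone center m, battery fraction, avg speed m/s)
candidates = {"n1": (120.0, 0.8, 2.0), "n2": (40.0, 0.3, 15.0)}
ranked = sorted(candidates,
                key=lambda n: eligibility(*candidates[n]), reverse=True)
core, backup = ranked[0], ranked[1]   # runner-up kept as backup node
print(core, backup)                   # n1 n2
```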
The scheme also retains the node with the second-highest priority as a backup node. Our algorithm is compared with the existing solution by simulation, and the results show that the core node selected by our scheme is more stable and hence reduces the number of handoffs. This in turn improves the service delivery performance by increasing the packet delivery ratio and decreasing the delay, the overhead, and the forwarding cost. 17. Two Dimensional Connectivity for Vehicular Ad-Hoc Networks Farivar, Masoud; Ashtiani, Farid 2008-01-01 In this paper, we focus on two-dimensional connectivity in sparse vehicular ad hoc networks (VANETs). In this respect, we find thresholds for the arrival rates of vehicles at the entrances of a block of streets such that connectivity is guaranteed with any desired probability. To this end, we exploit a mobility model recently proposed for sparse VANETs, based on BCMP open queuing networks, and solve the related traffic equations to find the traffic characteristics of each street, using the results to compute the exact probability of connectivity along these streets. Then, we use results from percolation theory and the proposed fast algorithms for evaluating the bond percolation problem in a random graph corresponding to the block of streets. We then find sufficiently accurate two-dimensional connectivity-related parameters, such as the average number of intersections connected to each other and the size of the largest set of inter-connected intersections. We have also proposed lower bounds for the case ... 18. A Comparison Study of Common Routing Protocols Used In Wireless Ad-Hoc Networks 2013-04-01 Full Text Available The aim of this study is to analyze and compare the performance of both reactive and proactive Mobile Ad hoc Network (MANET) routing protocols in different environments. Wireless networks are divided into two types: infrastructure and ad hoc networks. In wireless ad hoc networks each node can be a sender, router, and receiver, so these networks are less structured compared to infrastructure networks. Therefore wireless ad hoc networks need special routing protocols to overcome their limitations. Wireless ad hoc network routing protocols can be categorized into two types: reactive (on-demand) routing protocols and proactive routing protocols. In proactive routing protocols the nodes periodically send control messages across the network to build routing tables. Different routing protocols have been simulated using GloMoSim (the Global Mobile Information System Simulation library) and the PARSEC compiler. Five multi-hop wireless ad hoc network routing protocols have been simulated to cover a range of design choices: Wireless Routing Protocol (WRP), Fisheye State Routing (FSR), Dynamic Source Routing (DSR), Ad hoc On-demand Distance Vector (AODV), and Location Aided Routing (LAR). The protocols are evaluated in different environments to investigate performance metrics, including packet delivery ratio, end-to-end delay, and end-to-end throughput. 19. Performance Analysis of Mobile Ad-Hoc Network Routing Protocols using Network Simulator – 2 S. Manikandan 2015-11-01 Full Text Available An ad-hoc network is a network which consists of nodes that use a wireless interface to send packet data. Since the nodes in a network of this kind can serve as routers and hosts, they can forward packets on behalf of other nodes and run user applications.
A mobile ad-hoc network (MANET) is probably the most well-known example of this networking paradigm; such networks have been around for over twenty years, mainly exploited to design tactical networks. Furthermore, the multi-hop ad-hoc networking paradigm is often used for building sensor networks to study, control, and monitor events and phenomena. To exploit these potentialities, modeling, simulation, and theoretical analyses have to be complemented by real experiences, which provide both a direct evaluation of ad-hoc networks and, at the same time, precious information for realistic modeling of these systems. Different routing protocols, namely the Ad-hoc On-demand Distance Vector (AODV) protocol, the Dynamic Source Routing (DSR) protocol, and the Destination Sequenced Distance Vector (DSDV) protocol, are compared in a MANET, and their performance is evaluated based on various metrics like packet delivery ratio, average end-to-end delay, throughput, etc. For this purpose, a discrete event simulator known as NS2 is used. 20. High Throughput via Cross-Layer Interference Alignment for Mobile Ad Hoc Networks 2013-08-26 ...ad hoc networks (MANETs) under practical assumptions. Several problems were posed and solved that provide insight into when and how interference alignment... Recent investigations into the fundamental limits of mobile ad hoc networks have produced a physical layer method for approaching their capacity. This strategy, known 1. SIMULATION STUDY OF BLACKHOLE ATTACK IN THE MOBILE AD HOC NETWORKS SHEENU SHARMA 2009-06-01 Full Text Available A wireless ad hoc network is a temporary network set up by wireless nodes, usually moving randomly and communicating without a network infrastructure. Due to security vulnerabilities of the routing protocols, however, wireless ad hoc networks may be unprotected against attacks by malicious nodes. In this study we investigated the effects of blackhole attacks on network performance. We simulated blackhole attacks in the Qualnet simulator and measured the packet loss in the network with and without a blackhole. The simulation is done on AODV (the Ad hoc On Demand Distance Vector routing protocol). The network performance in the presence of a blackhole is reduced by up to 26%. 2. Anomaly detection using clustering for ad hoc networks - behavioral approach - 2012-06-01 Full Text Available Mobile ad hoc networks (MANETs) are multi-hop wireless networks of autonomous mobile nodes without any fixed infrastructure. In MANETs, it is difficult to detect malicious nodes because the network topology constantly changes due to node mobility. Intrusion detection is the means to identify intrusive behaviors and provide useful information to intruded systems to respond fast and to avoid or reduce damage. Anomaly detection algorithms have the advantage that they can detect new types of attacks (zero-day attacks). In this paper, we present a clustering-based Intrusion Detection System (ID-Cluster) that fits the requirements of MANETs. This dissertation addresses routing-layer misbehavior issues, with the main focus on thwarting the routing disruption attack on Dynamic Source Routing (DSR). To validate the research, a case study is presented using simulation with GloMoSim at different mobility levels.
Simulation results show that our proposed system can achieve desirable performance and meet the security requirements of MANETs. 3. Intelligent Information Dissemination Scheme for Urban Vehicular Ad Hoc Networks Jinsheng Yang 2015-01-01 Full Text Available In vehicular ad hoc networks (VANETs), a hotspot, such as a parking lot, is an information source and will receive inquiries from many vehicles seeking any possible free parking space. According to the routing protocols in the literature, each of the vehicles needs to flood its route discovery (RD) packets to discover a route to the hotspot before sending inquiry packets to the parking lot. As a result, a VANET near an urban area or city center may incur the broadcast storm problem due to so many flooding RD packets during rush hours. To avoid the broadcast storm problem, this paper presents a hotspot-enabled routing-tree-based data forwarding method, called the intelligent information dissemination scheme (IID). Our method lets the hotspot automatically decide when to build the routing tree for proactive information transmission, under the condition that the number of vehicle route discoveries during a given period exceeds a certain threshold, which is calculated through our analytical packet delivery model. The routing information is dynamically maintained by vehicles located at each intersection near the hotspot if the maintenance cost is less than that of allowing vehicles to discover routes themselves. Simulation results show that this method can minimize routing delays for vehicles with lower packet delivery overheads. 4. Performance improvement in geographic routing for Vehicular Ad Hoc Networks. Kaiwartya, Omprakash; Kumar, Sushil; Lobiyal, D K; Abdullah, Abdul Hanan; Hassan, Ahmed Nazar 2014-11-25 Geographic routing is one of the most investigated themes by researchers for reliable and efficient dissemination of information in Vehicular Ad Hoc Networks (VANETs). Recently, different Geographic Distance Routing (GEDIR) protocols have been suggested in the literature. These protocols focus on reducing the forwarding region towards the destination to select the Next Hop Vehicles (NHV). Most of these protocols suffer from the problem of elevated one-hop link disconnection, high end-to-end delay, and low throughput even at normal vehicle speed in high-vehicle-density environments. This paper proposes a Geographic Distance Routing protocol based on Segment vehicle, Link quality and Degree of connectivity (SLD-GEDIR). The protocol selects a reliable NHV using the criteria of segment vehicles, one-hop link quality, and degree of connectivity; a sketch of this kind of next-hop scoring follows below. The proposed protocol has been simulated in NS-2 and its performance has been compared with the state-of-the-art protocols: P-GEDIR, J-GEDIR, and V-GEDIR. The empirical results clearly reveal that SLD-GEDIR has lower link disconnection and end-to-end delay, and higher throughput, as compared to the state-of-the-art protocols. It should be noted that the performance of the proposed protocol is preserved irrespective of vehicle density and speed. 5. Algorithmic aspects of topology control problems for ad hoc networks Liu, R. (Rui); Lloyd, E. L. (Errol L.); Marathe, M. V. (Madhav V.); Ramanathan, R. (Ram); Ravi, S. S. 2002-01-01 Topology control problems are concerned with the assignment of power values to the nodes of an ad hoc network so that the power assignment leads to a graph topology satisfying some specified properties.
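A minimal sketch of next-hop vehicle selection in the spirit of SLD-GEDIR above, scoring candidate forwarders by geographic progress toward the destination, one-hop link quality, and degree of connectivity. The scoring formula, weights, and field names are illustrative assumptions, not the published protocol:

```python
import math

def score(vehicle, dest, weights=(0.5, 0.3, 0.2), max_degree=10):
    """Score a candidate next hop: closeness to the destination,
    one-hop link quality in [0, 1], and normalized node degree."""
    dx = dest[0] - vehicle["pos"][0]
    dy = dest[1] - vehicle["pos"][1]
    progress = 1.0 / (1.0 + math.hypot(dx, dy))   # closer -> higher
    degree = min(vehicle["degree"] / max_degree, 1.0)
    return (weights[0] * progress
            + weights[1] * vehicle["link_quality"]
            + weights[2] * degree)

dest = (1000.0, 0.0)
candidates = [
    {"id": "v1", "pos": (800.0, 10.0), "link_quality": 0.9, "degree": 4},
    {"id": "v2", "pos": (950.0, 5.0), "link_quality": 0.4, "degree": 2},
]
next_hop = max(candidates, key=lambda v: score(v, dest))
print(next_hop["id"])
```

Weighting link quality and connectivity alongside pure geographic progress is what counteracts the elevated one-hop link disconnection that distance-only GEDIR variants suffer from.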
This paper considers such problems under several optimization objectives, including minimizing the maximum power and minimizing the total power. A general approach leading to a polynomial algorithm is presented for minimizing maximum power for a class of graph properties called monotone properties. The difficulty of generalizing the approach to properties that are not monotone is pointed out. Problems involving the minimization of total power are known to be NP-complete even for simple graph properties. A general approach that leads to an approximation algorithm for minimizing the total power for some monotone properties is presented. Using this approach, a new approximation algorithm for the problem of minimizing the total power for obtaining a 2-node-connected graph is obtained. It is shown that this algorithm provides a constant performance guarantee. Experimental results from an implementation of the approximation algorithm are also presented. 6. Ad-hoc network DOA tracking via sequential Monte Carlo filtering GUO Li; GUO Yan; LIN Jia-ru; LI Ning 2007-01-01 A novel sequential Monte Carlo (SMC) algorithm is provided for direction-of-arrival (DOA) tracking of multiple maneuvering ad-hoc network terminals. A nonlinear mobility and observation model is adopted, which can describe the motion features of ad-hoc network terminals more realistically. The algorithm does not need any additional measurement equipment. Simulation results show significant tracking accuracy. Sözer, H.; Tekkalmaz, M.; Korpeoglu, I. 2009-01-01 Deployment of traditional peer-to-peer file sharing systems on a wireless ad-hoc network introduces several challenges. Information and workload distribution as well as routing are major problems for members of a wireless ad-hoc network, which are only aware of their immediate neighborhood. In this 8. An Efficient Quality of Service Based Routing Protocol for Mobile Ad Hoc Networks Tapan Kumar Godder 2011-05-01 Full Text Available An ad-hoc network is set up with multiple wireless devices without any infrastructure, and its employment is favored in many environments. Quality of Service (QoS) is one of the main issues for any network, and due to the bandwidth constraint and dynamic topology of mobile ad hoc networks, supporting QoS is an extremely challenging task. It is modeled as a multi-layer problem and is considered in both the Medium Access Control (MAC) and routing layers for ad hoc networks. The Ad-hoc On-demand Distance Vector (AODV) routing protocol is one of the most used and popular reactive routing protocols in ad-hoc networks. This paper proposes QoS-based AODV (QAODV), a new protocol which is a modified version of AODV. 9. Performance modeling of data dissemination in vehicular ad hoc networks Chaqfeh, Moumena; Lakas, Abderrahmane; Lazarova-Molnar, Sanja 2013-01-01 ...ad hoc nature which does not require fixed infrastructure or centralized administration. However, designing scalable information dissemination techniques for VANET applications remains a challenging task due to the inherent nature of such highly dynamic environments. Existing dissemination techniques often resort to simulation for performance evaluation, and there are only few studies that offer mathematical modeling. In this paper we provide a comparative study of existing performance modeling approaches for data dissemination techniques designed for different VANET applications... 10.
An Enhanced Route Recovery Protocol for Mobile Ad Hoc Networks Kim, Sangkyung; Park, Noyeul; Kim, Changhwa; Choi, Seung-Sik In case of link failures, many ad hoc routing protocols recover a route by employing source-initiated route re-discovery, but this approach can degrade system performance. Some use localized route recovery, which may yield non-optimal paths. Our proposal provides a mechanism that can enhance the overall routing performance by initiating route recovery at the destination node. We elucidate the effects through simulations, including comparisons with AODV and AODV with local repair. 11. 郎文华; 周明天 2002-01-01 In ad-hoc networks, key establishment protocols are mainly contributory, Diffie-Hellman-based key agreement protocols. In this paper, several typical protocols are analysed and evaluated, and their suitability is then discussed from the point of view of ad-hoc networks. 12. Design and Implementation of Anycast Services in Ad Hoc Networks Connected to IPv6 Networks Xiaonan Wang 2010-04-01 Full Text Available The paper proposes a communication model implementing an anycast service in an ad hoc network connected to IPv6 networks, where IPv6 nodes can obtain the anycast service provided by the ad hoc network. In this model, when an anycast mobile member in the ad hoc network moves, it can keep its existing communications with its corresponding nodes and continue providing anycast services with good quality of service to IPv6 nodes. The model creates a new kind of IPv6 address auto-configuration scheme which does not need duplicate address detection. The paper discusses and analyzes the model in depth, and the experimental data prove its validity and efficiency. 13. Contribution to design a communication framework for vehicular ad hoc networks in urban scenarios Tripp Barba, Carolina 2013-01-01 The constant mobility of people, the growing need to be always connected, the large number of vehicles that nowadays can be found on the roads, and the advances in technology make Vehicular Ad hoc Networks (VANETs) a major area of research. Vehicular Ad hoc Networks are a special type of wireless Mobile Ad hoc Network (MANET), which allow a group of mobile nodes to configure a temporary network and maintain it without the need of a fixed infrastructure. A vehicular network presents some spec... 14. 刘静; 王赜 2012-01-01 To further enhance the security of ad hoc networks, a novel transitive trust chain based on the Trusted Platform Module (TPM) is presented. A scheme whose design objective is to extend trust relationships from the nodes of an ad hoc network to the network itself is proposed, and the trust relationships between peers can be evaluated with a trust model. Authenticated Routing for Ad hoc Networks (ARAN) is improved by introducing trust levels, so that the route with the highest trust level is selected. The trusted transfer model in ad hoc networks is analyzed. 15. Mutanga, MB 2011-09-01 Full Text Available The lack of manual management mechanisms in wireless ad-hoc networks means that automatic configuration of IP addresses and other related network parameters is crucial. Many IP address autoconfiguration mechanisms have been proposed in the literature... 16.
A novel technique for node authentication in mobile ad hoc networks Srinivas Aluvala 2016-09-01 Full Text Available A mobile ad hoc network is a collection of mobile nodes that communicate with one another over wireless links, forming a network in which each node acts as a router and forwards packets to destinations. The dynamic topology and self-organization of the nodes make the network more vulnerable. In MANETs, the major challenging task is to provide security during the routing of data packets. Various kinds of attacks have been studied in ad hoc networks, but no proper solution has been found for them. Hence, preventing malicious nodes from destroying the network plays a vital role in ad hoc networks. In this paper, a novel technique is proposed to provide node authentication when a new node joins the network and before it initiates the route discovery process in mobile ad hoc networks. It is also shown how the proposed technique mitigates the impact of attacks on nodes. 17. A Novel Approach for Attacks Mitigation in Mobile Ad Hoc Networks Using Cellular Automatas 2012-04-01 Full Text Available Many security schemes for mobile ad-hoc networks (MANETs) have been proposed so far, but none of them has been successful in combating the different types of attacks that a mobile ad-hoc network often faces. This paper provides one way of mitigating attacks in mobile ad-hoc networks, by authenticating the node that tries to access the network. The scheme is applied using cellular automata (CA). Our simulation results show how cellular automata are implemented for user authentication and secure transmission in MANETs. 18. Clustering in mobile ad hoc network based on neural network CHEN Ai-bin; CAI Zi-xing; HU De-wen 2006-01-01 An on-demand distributed clustering algorithm based on a neural network is proposed. The system parameters and the combined weight for each node are computed, and cluster-heads are chosen using the weighted clustering algorithm; then a training set is created and a neural network is trained. In this algorithm, several system parameters are taken into account, such as the ideal node-degree, the transmission power, the mobility, and the battery power of the nodes. The algorithm can be used directly to test whether a node is a cluster-head or not. Moreover, cluster re-creation can be sped up. 19. Analysis of Fuzzy Logic Based Intrusion Detection Systems in Mobile Ad Hoc Networks A. Chaudhary 2014-01-01 Full Text Available Due to advances in wireless technologies, many new paradigms have opened for communications. Among these technologies, mobile ad hoc networks play a prominent role in providing communication in many areas, because of their independence from predefined infrastructure. In terms of security, however, these networks are more vulnerable than conventional networks, because firewall- and gateway-based security mechanisms cannot be applied to them. That is why intrusion detection systems are a keystone in these networks. A number of intrusion detection systems have been proposed to handle uncertain activity in mobile ad hoc networks. This paper focuses on proposed fuzzy-based intrusion detection systems in mobile ad hoc networks and presents their effectiveness at identifying intrusions. It also examines the drawbacks of fuzzy-based intrusion detection systems and discusses future directions in the field of intrusion detection for mobile ad hoc networks. 20.
Ad Hoc Network Architecture for Multi-Media Networks 2007-12-01 The Ad hoc On-demand Distance Vector (AODV) routing protocol is an improvement on the previously mentioned DSDV algorithm. In AODV, a node is not... The routing protocol adopted by Sun SPOT is the AODV routing protocol. In the star network topology, the simulation begins with... Rana Asif Rehman 2015-01-01 2. Research on multihop wireless ad hoc network and its routing protocols Li, Dan; Zhu, Qiuping 2004-04-01 An ad hoc network is a collection of wireless mobile nodes dynamically forming a temporary network without using any existing network infrastructure or centralized administration. Because of their decentralized, self-organized, rapidly deployable, and mobile nature, more attention is being paid to the use of ad hoc networks in emergencies. In such an environment, multiple network "hops" may be needed for one node to exchange data with another across the network, due to the limited range of each mobile node's wireless transmissions. For an ad hoc network, therefore, a rational routing protocol is especially important. The paper analyzes the character of ad hoc networks and their special requirements for communication protocols. Regarding routing, the paper presents four multi-hop wireless ad hoc network routing protocols that cover a range of design choices: DSDV (Destination Sequenced Distance Vector), TORA (Temporally Ordered Routing Algorithm), DSR (Dynamic Source Routing), and AODV (Ad hoc On-demand Distance Vector); a sketch of the route discovery the on-demand protocols share follows below. After a thorough analysis of the network structure and routing protocols, the paper proposes a new hybrid routing protocol and gives a view of future work. 3. Beamspace Multiple Input Multiple Output. Part II: Steerable Antennas in Mobile Ad Hoc Networks 2016-09-01 4. 王晓华; 贾继洋 2014-01-01 Aiming to solve the problem that contemporary communication systems cannot operate in the absence of any network, a new Ad-hoc networking scheme based on an ARM-Linux system is designed. First, this paper introduces the porting of Linux 2.6.36 and the RT3070 wireless module driver to the ARM11 platform. Then, based on TCP/IP socket programming, communication programs are designed and tests are conducted on the ARM-Linux platform. The results prove that the Ad-hoc network can achieve reliable, high-rate communication between nodes with minimal resources and cost, which is significant for practical application. 5. Implement DUMBO as a Network Based on Mobile Ad hoc Network (MANETs) 2011-10-01 Full Text Available Nowadays there is a large variety of wireless access networks.
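Since the survey above covers on-demand protocols such as DSR and AODV, a minimal sketch of the flooding route discovery they share may help: a route request (RREQ) spreads hop by hop, and each node forwards an unseen request only once. This is a generic illustration of the idea, not any one protocol's packet format or sequence-number handling:

```python
from collections import deque

def discover_route(neighbors, src, dst):
    """BFS-style RREQ flood: each node rebroadcasts an unseen request
    once; the first RREQ to reach dst defines the path that a route
    reply would retrace back to the source."""
    seen = {src}
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        for nb in neighbors[path[-1]]:
            if nb == dst:
                return path + [nb]
            if nb not in seen:
                seen.add(nb)
                queue.append(path + [nb])
    return None                          # destination unreachable

topo = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(discover_route(topo, "a", "d"))    # ['a', 'b', 'c', 'd']
```

The "forward once" rule bounds control traffic, but under dense topologies it still produces the broadcast storm that protocols like AODV-DS and EPAODV, discussed in earlier entries, try to tame.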
One of these networks is Digital Ubiquitous Mobile Broadband OLSR (DUMBO), which has been strongly motivated by the fact that large-scale natural disasters can wipe out terrestrial communication infrastructure. DUMBO routers can automatically form one or more self-configuring, self-healing networks called Mobile Ad hoc Networks (MANETs). The Vehicle Ad hoc Network (VANET) is an advanced version of the MANET. VANETs are offered for use by network service providers for managing connections to achieve high real-time performance, high bandwidth, and high availability in networks such as WLAN, UMTS, Wi-MAX, etc. This paper surveys DUMBONET routers with relevant algorithms, approaches, and solutions from the literature. 6. A NOVEL APPROACH FOR INFORMATION SECURITY IN AD HOC NETWORKS THROUGH SECURE KEY MANAGEMENT S. Suma Christal Mary 2013-01-01 Full Text Available Ad hoc networks provide flexible and adaptive networks with no fixed infrastructure and dynamic topology. Owing to the vulnerable nature of ad hoc networks, there are many security threats that hold back their development. Therefore, to provide security for the information of users and to preserve their privacy, it becomes mandatory to use cryptographic techniques to set up a secure mobile ad hoc network. Earlier cryptographic methods based on computational complexity break down with the advent of fast computers. In this proposal, we propose the Secure Key Management (SKM) framework. We make use of the McEliece algorithm embedded with Dispense Key for key generation and key distribution, and it is highly scalable with respect to memory. The experimental results show that our framework provides a high-performance platform for key generation and key distribution scenarios. The SKM framework reduces the execution time of encryption and decryption by minimizing the number of keys. 7. HEAD: A Hybrid Mechanism to Enforce Node Cooperation in Mobile Ad Hoc Networks QUO Jianli; LIU Hongwei; DONG Jian; YANG Xiaozong 2007-01-01 8. Mobile Codes Localization in Ad hoc Networks: a Comparative Study of Centralized and Distributed Approaches Zafoune, Youcef; kanawati, Rushed; 10.5121/ijcnc.2010.2213 2010-01-01 This paper presents a new approach to the management of mobile ad hoc networks. Our alternative, based on mobile agent technology, allows the design of a mobile centralized server in an ad hoc network, where it is not obvious to think about centralized management, due to the absence of any administration or fixed infrastructure in these networks. The aim of this centralized approach is to provide permanent availability of services in ad hoc networks, which are characterized by distributed management. In order to evaluate the performance of the proposed approach, we apply it to the problem of mobile code localization in ad hoc networks. A comparative study of centralized and distributed localization protocols, based upon simulation and measured in terms of the number of messages exchanged and the response time, shows that the centralized approach in a distributed form is more interesting than a totally centralized approach. 9. Mobility Models for Next Generation Wireless Networks Ad Hoc, Vehicular and Mesh Networks Santi, Paolo 2012-01-01 Mobility Models for Next Generation Wireless Networks: Ad Hoc, Vehicular and Mesh Networks provides the reader with an overview of mobility modelling, encompassing both theoretical and practical aspects related to the challenging mobility modelling task; a sketch of the baseline Random Waypoint model follows below.
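As a complement to the mobility-modelling overview above, here is a minimal sketch of the classic Random Waypoint model that most of these works use as a baseline. Pause times are omitted for brevity, and the area, speed range, and time step are illustrative constants:

```python
import math
import random

def random_waypoint(area=(1000.0, 1000.0), speed=(1.0, 20.0),
                    steps=100, dt=1.0):
    """Yield (x, y) positions: pick a uniform waypoint and speed,
    move toward it at constant speed, then pick a new waypoint and
    speed upon arrival (pause times omitted)."""
    x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
    tx, ty = random.uniform(0, area[0]), random.uniform(0, area[1])
    v = random.uniform(*speed)
    for _ in range(steps):
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist <= v * dt:                     # arrived at the waypoint
            x, y = tx, ty
            tx, ty = random.uniform(0, area[0]), random.uniform(0, area[1])
            v = random.uniform(*speed)
        else:
            x, y = x + dx / dist * v * dt, y + dy / dist * v * dt
        yield (x, y)

trace = list(random_waypoint(steps=5))
print(trace[0], trace[-1])
```

The sharp, uncorrelated turns this model produces are exactly why the FANET and VANET entries earlier in this listing argue for more realistic alternatives such as SRCM or trace-driven vehicle mobility.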
The book also provides up-to-date coverage of mobility models for next generation wireless networks, and offers an in-depth discussion of the most representative mobility models for major next generation wireless network application scenarios, including WLAN/mesh networks, vehicular networks, wireless sensor networks, and ... 10. Voice Communications over 802.11 Ad Hoc Networks: Modeling, Optimization and Call Admission Control Xu, Changchun; Xu, Yanyi; Liu, Gan; Liu, Kezhong Supporting the quality of service (QoS) of multimedia communications over IEEE 802.11 based ad hoc networks is a challenging task. This paper develops a simple 3-D Markov chain model for queuing analysis of the IEEE 802.11 MAC layer. The model is applied to performance analysis of voice communications over IEEE 802.11 single-hop ad hoc networks. Using the model, we optimize the performance of the IEEE 802.11 MAC layer and obtain the maximum number of voice calls in IEEE 802.11 ad hoc networks, as well as statistical performance bounds. Furthermore, we design a fully distributed call admission control (CAC) algorithm which can provide strict statistical QoS guarantees for voice communications over IEEE 802.11 ad hoc networks; a sketch of such an admission check follows below. Extensive simulations indicate the accuracy of the analytical model and the CAC scheme. 11. Mitigating Malicious Attacks Using Trust Based Secure-BEFORE Routing Strategy in Mobile Ad Hoc Networks Shah, Rutuja; Subramaniam, Sumathy; Lekala Dasarathan, Dhinesh Babu 2016-01-01 Mobile ad hoc networks (MANETs), being infrastructureless and dynamic in nature, are predominantly susceptible to attacks such as black hole, wormhole, and cunning gray hole attacks at the source or destination... 12. DESAIN ALGORITMA DAN SIMULASI ROUTING UNTUK GATEWAY AD HOC WIRELESS NETWORKS Nixson Meok 2009-12-01 Full Text Available Routing protocols for wireless ad hoc networks are much needed in the communication process between terminals, to send data packets through one or several nodes to a destination address while the network topology is constantly changing. Many previous works discussed ad hoc routing, both for MANETs (mobile ad hoc networks) and wireless networks, but the emphasis was more on comparing the performance of several ad hoc routing protocols. In this work, by contrast, a routing algorithm model is built for a gateway on land and nodes modeled as boats moving on the sea. Under the assumption that communication between terminals uses the Very High Frequency radio band, the algorithm built in the simulation is based on the range of the HF band. The result of this simulation will be developed as a platform to implement the development of multiuser communication services. 13. A MOBILE AGENT BASED INTRUSION DETECTION SYSTEM ARCHITECTURE FOR MOBILE AD HOC NETWORKS Binod Kumar Pattanayak 2014-01-01 Full Text Available Applications of Mobile Ad Hoc Networks (MANETs) have become extensively popular over the years among researchers. However, the dynamic nature of MANETs imposes a set of challenges to their efficient implementation in practice. One such challenge is intrusion detection and prevention procedures, which are intended to ensure the secure performance of ad hoc applications. In this study, we introduce a mobile agent based intrusion detection and prevention architecture for a clustered MANET. Here, a mobile agent resides in each cluster of the ad hoc network, and each cluster runs a specific application at any point of time.
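A minimal sketch of the kind of distributed call admission control described in the voice-over-802.11 entry above: each node admits a new voice call only while the number of active calls stays below a capacity bound derived offline from the MAC model. The capacity constant and class shape are assumptions for illustration; in the paper the bound comes from the 3-D Markov chain analysis:

```python
class VoiceCAC:
    """Admit a new voice call only if the channel can still meet the
    statistical QoS bound. max_calls would be derived from the MAC
    model; here it is just an assumed constant."""

    def __init__(self, max_calls=12):
        self.max_calls = max_calls
        self.active = 0

    def admit(self):
        if self.active < self.max_calls:
            self.active += 1
            return True
        return False                     # reject: QoS would degrade

    def release(self):
        self.active = max(0, self.active - 1)

cac = VoiceCAC(max_calls=2)
print(cac.admit(), cac.admit(), cac.admit())   # True True False
```

Because each node applies the same precomputed bound locally, no central admission server is needed, which is what makes the scheme fully distributed.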
14. A Group Based Key Sharing and Management Algorithm for Vehicular Ad Hoc Networks

Zeeshan Shafi Khan 2014-01-01

15. User-centred and context-aware identity management in mobile ad-hoc networks

Arabo, Abdullahi 2013-01-01

The emergent notion of ubiquitous computing makes it possible for mobile devices to communicate and provide services via networks connected in an ad hoc manner. This has resulted in the proliferation of wireless technologies such as mobile ad hoc networks (MANETs), which offer attractive solutions for services that need flexible setup as well as dynamic and low-cost wireless connectivity. However, the growing trend outlined above also raises serious concerns over identity management (IM) due…

16. On-Demand Key Distribution for Mobile Ad-Hoc Networks

Daniel F. Graham, BS 2007-03-01

Thesis for the degree of Master of Science in Computer Science, Air Force Institute of Technology (AFIT/GCS/ENG/07-12). Approved for public release; distribution unlimited.

17. Review Strategies and Analysis of Mobile Ad Hoc Network-Internet Integration Solutions

Rakesh Kumar 2010-07-01

18. LINK STABILITY WITH ENERGY AWARE AD HOC ON DEMAND MULTIPATH ROUTING PROTOCOL IN MOBILE AD HOC NETWORKS

Senthil Murugan Tamilarasan 2013-01-01

A mobile ad hoc network is a wireless network in which mobile nodes communicate with each other without infrastructure, since there is no access point. MANET protocols can be classified as proactive or reactive. With proactive routing protocols, all nodes participating in the network keep a routing table, updated periodically, which is used to find a path between source and destination. With reactive routing protocols, nodes initiate a route discovery procedure when routes are needed on demand. Many routing protocols have been designed in recent years to find better routes in MANETs, but they do not take communication links and battery energy into account. The quality of the links between nodes and the remaining energy of nodes are very important factors for improving the quality of routing protocols. This study presents an innovative Link Stability with Energy Aware (LSEA) multipath routing protocol. The key idea of the protocol is to find links with high quality, maximum remaining energy and low delay. Accounting for these factors increases the packet delivery ratio and throughput and reduces end-to-end delay. LSEAMRP was simulated on NS-2 and evaluation results are shown.
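The LSEA abstract combines link quality, remaining energy and delay into one route choice; below is a hedged Python sketch of such a composite score. The weights and the route tuples are invented, and the paper's actual metric may well differ:

# Rank candidate routes by a weighted mix of stability, bottleneck energy
# and delay: higher stability/energy is better, lower delay is better.
def lsea_score(link_stability, min_residual_energy, end_to_end_delay,
               w_s=0.4, w_e=0.4, w_d=0.2):
    return w_s * link_stability + w_e * min_residual_energy - w_d * end_to_end_delay

routes = {
    "route-1": (0.9, 0.5, 0.12),   # (stability, min energy, delay), normalised
    "route-2": (0.7, 0.9, 0.08),
}
best = max(routes, key=lambda r: lsea_score(*routes[r]))
print("selected:", best)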
19. Towards Trust-based Cognitive Networks: A Survey of Trust Management for Mobile Ad Hoc Networks

2009-06-01

20. Handbook on theoretical and algorithmic aspects of sensor, ad hoc wireless, and peer-to-peer networks

Wu, Jie 2005-01-01

Preface. Ad Hoc Wireless Networks: A Modular Cross Layer Architecture for Ad Hoc Networks, M. Conti, J. Crowcroft, G. Maselli, and G. Turi; Routing Scalability in MANETs, J. Eriksson, S. Krishnamurthy and M. Faloutsos; Uniformly Distributed Algorithm for Virtual Backbone Routing in Ad Hoc Wireless Networks, D.S. Kim; Maximum Necessary Hop Count for Packet Routing in MANET, X. Chen and J. Shen; Efficient Strategyproof Multicast in Selfish Wireless Networks, X.-Yang Li; Geocasting in Ad Hoc and Sensor Networks, I. Stojmenovic; Topology Control for Ad hoc Networks: Present Solutions and Open Issues, C.-C. Shen a…

1. Service for fault tolerance in the Ad Hoc Networks based on Multi Agent Systems

Ghalem Belalem 2011-02-01

Ad hoc networks are distributed, self-organized networks that do not require infrastructure. In such networks, mobile devices are subject to disconnections, whether voluntary or involuntary, caused by the high mobility of the ad hoc network. Through this work we try to contribute to solving these problems in order to ensure continuous service, by proposing a fault-tolerance service based on Multi Agent Systems (MAS) that anticipates problems and makes decisions concerning critical nodes. Our work studies the prediction of voluntary and involuntary disconnections in the ad hoc network; we therefore propose a fault-tolerance service that distributes information effectively across the network by selecting certain objects of the network to hold duplicates of the information.

2. A Comparison of the TCP Variants Performance over different Routing Protocols on Mobile Ad Hoc Networks

2010-03-01

We describe variants of TCP (Tahoe, Vegas); TCP is the most widely used transport protocol in both wired and wireless networks. In mobile ad hoc networks, the topology changes frequently due to node mobility, which leads to significant packet losses and network throughput degradation, because TCP fails to distinguish path failure from network congestion. In this paper, the performance of TCP over different routing protocols (DSR, AODV and DSDV) in ad hoc networks was studied by simulation experiments, and the results are reported.

3. Contribution to design a communication framework for vehicular ad hoc networks in urban scenarios

Tripp Barba, Carolina 2013-01-01

The constant mobility of people and the growing need to be connected at all times have made vehicular networks an area of growing interest. The large number of vehicles on the road today, together with technological advances, has turned vehicular networks (VANETs, Vehicular Ad hoc Networks) into a major field of research. Vehicular networks are a special type of wireless mobile ad hoc network which, like MANETs (Mobile Ad hoc…

4. An implementation of traffic light system using multi-hop Ad hoc networks

Ansari, Imran Shafique 2009-08-01

In ad hoc networks, nodes cooperate with each other to form a temporary network without the aid of any centralized administration. No wired base station or infrastructure is supported, and each host communicates via radio packets. Each host must act as a router, since routes are mostly multi-hop, due to the limited transmission power set by government agencies (e.g., the Federal Communication Commission (FCC) allows 1 Watt in the Industrial, Scientific and Medical (ISM) band). Wireless mobile ad hoc networks by nature depend on batteries or other exhaustible means for their energy.
A limited energy capacity may be the most significant performance constraint, so radio resource and power management is an important issue in any wireless network. In this paper, a design for a traffic light system employing ad hoc networks is proposed. The traffic light system runs automatically based on signals sent through a multi-hop ad hoc network of 'n' nodes utilizing the Token Ring protocol, which is efficient for this application from the energy perspective. The experiment consists of a graphical user interface that simulates the traffic lights, and laptops (with wireless network adapters) that run the graphical user interface and are responsible for setting up the ad hoc network between them. The traffic light system has been implemented using A Mesh Driver (which allows more than one wireless device to be connected simultaneously) and Java-based client-server programs. © 2009 IEEE.

5. Review of Artificial Immune System to Enhance Security in Mobile Ad-hoc Networks

Tarun Dalal 2012-04-01

Mobile ad hoc networks consist of wireless hosts that communicate with each other. Routes in a mobile ad hoc network may consist of many hops through other hosts between source and destination. The hosts are not fixed; due to host mobility, the topology can change at any time. Mobile ad hoc networks are much more vulnerable to security attacks. Current research on securing mobile ad hoc networks mainly focuses on confidentiality, integrity, authentication, availability, and fairness. The design of routing protocols is crucial in a mobile ad hoc network. There are various techniques for securing mobile ad hoc networks, e.g., cryptography. Cryptography provides an efficient mechanism for security, but it creates a great deal of overhead. So an approach analogous to the biological immune system is used, known as the Artificial Immune System (AIS). The reason AIS is suited to security purposes is that the human immune system protects the body against damage from an extremely large number of harmful bacteria, viruses, parasites and fungi, termed pathogens, and it does this largely without prior knowledge of the structure of these pathogens. AIS provides security by identifying non-trusted nodes and eliminating all non-trusted nodes from the network.

6. Location Based Throughput Maximization Routing in Energy Constrained Mobile Ad-hoc Network

V. Sumathy 2006-01-01

In wireless ad hoc networks, power consumption becomes an important issue due to limited battery power. One of the reasons for energy expenditure in these networks is an irregularly distributed node pattern, which imposes a large interference range in certain areas. To maximize the lifetime of an ad hoc mobile network, the power consumption rate of each node must be evenly distributed and the overall transmission range of each node must be minimized. Our protocol, location-based throughput maximization routing in an energy-constrained ad hoc network, finds routing paths that maximize the lifetime of individual nodes and minimize the total transmission energy consumption, so that the life of the entire network, the network throughput and the reliability of the paths are all increased. Location-based energy-constrained routing finds the distance between nodes; based on the distance, the required transmission power is calculated, dynamically reducing the total transmission energy.
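A sketch of the distance-to-power step described above, assuming the common path-loss model in which required transmit power grows roughly as distance**alpha (alpha typically between 2 and 4); the node positions, alpha and p_ref are illustrative, not values from the paper:

import math

def required_power(d, alpha=2.0, p_ref=1.0):
    # Assumed path-loss model: power needed scales as d**alpha.
    return p_ref * d ** alpha

def pick_next_hop(src, dst, neighbours):
    # Only consider neighbours that make progress toward the destination,
    # then choose the one cheapest to reach in transmission energy.
    progress = [n for n in neighbours if math.dist(n, dst) < math.dist(src, dst)]
    return min(progress, key=lambda n: required_power(math.dist(src, n)), default=None)

print(pick_next_hop((0, 0), (10, 0), [(3, 1), (5, 4), (-2, 0)]))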
7. I Nyoman Trisna Wirawan 2016-01-01

8. Performance evaluation of fingerprint image processing for high security ad-hoc network

P. Velayutham; Dr. T. G. Palanivelu 2010-01-01

With the rapid development of wireless technology, various mobile devices have been developed for military and civilian applications. Defense research and development has shown increasing interest in ad hoc networks, because a military has to be mobile, and peer-to-peer is a good architecture for mobile communication in coalition operations. In this paper, the proposed methodology is a novel, robust approach to secure fingerprint authentication and matching techniques implemented in ad hoc wireless…

9. An Energy-Aware On-Demand Routing Protocol for Ad-Hoc Wireless Networks

Veerayya, Mallapur 2008-01-01

An ad hoc wireless network is a collection of nodes that come together to dynamically create a network, with no fixed infrastructure or centralized administration. An ad hoc network is characterized by energy-constrained nodes, bandwidth-constrained links and dynamic topology. With the growing use of wireless networks (including ad hoc networks) for real-time applications, such as voice, video, and real-time data, the need for quality of service (QoS) guarantees in terms of delay, bandwidth, and packet loss is becoming increasingly important. Providing QoS in ad hoc networks is a challenging task because of the dynamic nature of the network topology and imprecise state information. Hence, it is important to have a dynamic routing protocol with fast re-routing capability, which also provides stable routes during the lifetime of the flows. In this thesis, we propose a novel, energy-aware, stable routing protocol named Stability-based QoS-capable Ad-hoc On-demand Distance Vector (SQ-AODV), which is an enhancement…

10. QoS-aware multicast routing protocol for Ad hoc networks

Sun Baolin; Li Layuan 2006-01-01

Ad hoc wireless networks consist of mobile nodes interconnected by multihop communication paths. Unlike conventional wireless networks, ad hoc networks have no fixed network infrastructure or administrative support. Due to the bandwidth constraints and dynamic topology of mobile ad hoc networks, supporting quality of service (QoS) is an inherently complex, difficult and very important research issue. The MAODV (Multicast Ad hoc On-demand Distance Vector) routing protocol provides fast and efficient route establishment between mobile nodes that need to communicate with each other, with minimal control overhead and route acquisition latency; in addition to unicast routing, MAODV supports multicast and broadcast as well. The multicast routing problem with multiple QoS constraints, which may involve delay, bandwidth and packet loss measurements, is discussed, and a network model for studying the ad hoc network QoS multicast routing problem is described. A complete solution for QoS multicast routing is presented, based on an extension of the MAODV routing protocol that deals with delay, bandwidth and packet loss measurements; the solution is based on lower-layer specifics. Simulation results show that, with the proposed QoS multicast routing protocol, end-to-end delay, bandwidth and packet loss on a route can be improved in most cases. It is a viable approach to multicast routing decisions with multiple QoS constraints.
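For readers unfamiliar with how the three QoS constraints named above compose along a path, here is a small Python sketch under the usual composition rules (delay adds, bandwidth is the bottleneck minimum, delivery probabilities multiply); the link values and thresholds are invented:

def path_ok(links, max_delay, min_bw, max_loss):
    delay = sum(l["delay"] for l in links)          # delay is additive
    bw = min(l["bw"] for l in links)                # bandwidth is the bottleneck
    delivery = 1.0
    for l in links:
        delivery *= (1.0 - l["loss"])               # loss compounds per link
    return delay <= max_delay and bw >= min_bw and (1.0 - delivery) <= max_loss

links = [{"delay": 10, "bw": 2.0, "loss": 0.01},
         {"delay": 15, "bw": 1.5, "loss": 0.02}]
print(path_ok(links, max_delay=30, min_bw=1.0, max_loss=0.05))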
11. Specification and Validation of an Edge Router Discovery Protocol for Mobile Ad Hoc Networks

Kristensen, Lars Michael; Jensen, Kurt 2004-01-01

We present an industrial project at Ericsson Telebit A/S where Coloured Petri Nets (CP-nets or CPNs) have been used for the design and specification of an edge router discovery protocol for mobile ad hoc networks. The Edge Router Discovery Protocol (ERDP) supports an edge router in a stationary core network in assigning network address prefixes to gateways in mobile ad hoc networks. This paper focuses on how CP-nets and the CPN computer tools have been applied in the development of ERDP. A CPN model has been constructed that constitutes a formal executable specification of ERDP. Simulation…

12. Deny-by-Default Distributed Security Policy Enforcement in Mobile Ad Hoc Networks

Alicherry, Mansoor; Keromytis, Angelos D.; Stavrou, Angelos

Mobile ad hoc networks (MANETs) are increasingly employed in tactical military and civil rapid-deployment networks, including emergency rescue operations and ad hoc disaster-relief networks. However, this flexibility of MANETs comes at a price, when compared to wired and base-station-based wireless networks: MANETs are susceptible to both insider and outsider attacks. This is mainly because of the lack of a well-defined defense perimeter, which prevents the effective use of wired defenses such as firewalls and intrusion detection systems.

13. The Evolution of IDS Solutions in Wireless Ad-Hoc Networks To Wireless Mesh Networks

Novarun Deb 2011-12-01

The domain of wireless networks is inherently vulnerable to attacks due to the unreliable wireless medium. Such networks can be secured from intrusions using either prevention or detection schemes. This paper focuses its study on intrusion detection rather than prevention of attacks. As attackers keep improvising too, an active prevention method alone cannot provide total security to the system. Herein lies the importance of intrusion detection systems (IDS), which are designed solely to detect intrusions in real time. Wireless networks are broadly classified into wireless ad hoc networks (WAHNs), mobile ad hoc networks (MANETs), wireless sensor networks (WSNs) and, most recently, wireless mesh networks (WMNs). Several IDS solutions have been proposed for these networks. This paper extends an earlier survey of IDS solutions for MANETs and WMNs by offering a comparative insight into recent IDS solutions for all the subdomains of wireless networks.

14. A Survey of Congestion Control in Proactive Source Routing Protocol in Mobile Ad Hoc Networks

Bhagyashree S Kayarkar 2014-12-01

In mobile ad hoc networks (MANETs), congestion can occur at the intermediate nodes when packets are transferred from a source to a destination. Congestion in MANETs is mainly due to frequent topology changes and high node mobility, which lead to high packet loss. TCP-based congestion control techniques become difficult to apply in ad hoc networks, because of the high density of nodes and the frequent changes of topology. In this paper, to control congestion in a proactive source routing protocol, an error message is generated by the receiver to reduce the packet sending rate, using a new control message, the Packet Error Announcing Message (PEAM).
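A hedged sketch of receiver-driven rate control in the spirit of the PEAM message; this summary does not spell out the sender's control law, so the halving back-off and the additive probe below are assumptions, not the paper's design:

class Sender:
    def __init__(self, rate=100.0, floor=1.0, cap=100.0):
        self.rate, self.floor, self.cap = rate, floor, cap

    def on_peam(self):
        # Back off multiplicatively when the receiver signals errors.
        self.rate = max(self.floor, self.rate / 2.0)

    def on_ack_interval(self):
        # Probe back up gently while no PEAM arrives.
        self.rate = min(self.cap, self.rate + 1.0)

s = Sender()
for event in ["ack", "peam", "ack", "peam"]:
    s.on_peam() if event == "peam" else s.on_ack_interval()
    print(event, round(s.rate, 1))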
15. A Decentralized VPN Service over Generalized Mobile Ad-Hoc Networks

Fujita, Sho; Shima, Keiichi; Uo, Yojiro; Esaki, Hiroshi

We present a decentralized VPN service that can be built over generalized mobile ad hoc networks (Generalized MANETs), in which topologies can be represented as a time-varying directed multigraph. We address wireless ad hoc networks and overlay ad hoc networks as instances of Generalized MANETs. We first propose an architecture for operating on various kinds of networks through a single set of operations. We then design and implement a decentralized VPN service on the proposed architecture. Through the development and operation of a prototype system, we found that the proposed architecture makes the VPN service applicable to each instance of Generalized MANETs, and that the VPN service makes it possible for unmodified applications to operate on these networks.

16. A SURVEY OF CONGESTION CONTROL IN PROACTIVE SOURCE ROUTING PROTOCOL IN MOBILE AD HOC NETWORKS

Bhagyashree S Kayarkar 2015-10-01

In mobile ad hoc networks (MANETs), congestion can occur at the intermediate nodes when packets are transferred from a source to a destination. Congestion in MANETs is mainly due to frequent topology changes and high node mobility, which lead to high packet loss. TCP-based congestion control techniques become difficult to apply in ad hoc networks, because of the high density of nodes and the frequent changes of topology. In this paper, to control congestion in a proactive source routing protocol, an error message is generated by the receiver to reduce the packet sending rate, using a new control message, the Packet Error Announcing Message (PEAM).

17. Hybrid Packet-Pheromone-Based Probabilistic Routing for Mobile Ad Hoc Networks

Kashkouli Nejad, Keyvan; Shawish, Ahmed; Jiang, Xiaohong; Horiguchi, Susumu

Ad hoc networks are collections of mobile nodes communicating over wireless media without any fixed infrastructure. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations such as natural disasters or military conflicts. Current ad hoc networks can only support either high mobility or a high transmission rate at a time, because they employ static approaches in their routing schemes. However, with the continuous growth of ad hoc network size, node mobility and transmission rate, the development of new adaptive and dynamic routing schemes has become crucial. In this paper we propose a new routing scheme to support high transmission rates and high node mobility simultaneously in a large ad hoc network, by combining a newly proposed packet-pheromone-based approach with the Hint-Based Probabilistic Protocol (HBPP) for congestion avoidance, with dynamic path selection in the packet-forwarding process. Because it uses available feedback information, the proposed algorithm does not introduce any additional overhead. The extensive simulation-based analysis conducted in this paper shows that the proposed algorithm offers low packet latency and achieves a significantly higher delivery probability in comparison with the available Hint-Based Probabilistic Protocol (HBPP).
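A toy version of pheromone-weighted next-hop selection, the general mechanism behind packet-pheromone routing; the neighbour names, reinforcement amount and evaporation factor below are invented, not taken from the paper:

import random

pheromone = {"B": 1.0, "C": 1.0, "D": 1.0}   # per-neighbour pheromone values

def choose_next_hop():
    # Selection probability is proportional to pheromone.
    return random.choices(list(pheromone), weights=list(pheromone.values()))[0]

def reinforce(hop, amount=0.5, evaporation=0.9):
    for n in pheromone:
        pheromone[n] *= evaporation           # evaporate everywhere
    pheromone[hop] += amount                  # reward the successful hop

for _ in range(3):
    reinforce(choose_next_hop())
print(pheromone)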
18. Mitigate DoS and DDoS attacks in Mobile Ad Hoc Networks

Michalas, Antonis; Komninos, Nikos; Prasad, Neeli R. 2011-01-01

This paper proposes a technique to defeat Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in ad hoc networks. The technique is divided into two main parts, combining game theory and cryptographic puzzles. Introduced first is a new client puzzle to prevent DoS attacks in such networks. The second part presents a multiplayer game that takes place between the nodes of an ad hoc network, based on fundamental principles of game theory. By combining computational problems with puzzles, the approach improves the efficiency and latency of the communicating nodes and the resistance to DoS and DDoS attacks. Experimental results show the effectiveness of the approach for devices with limited resources and for environments like ad hoc networks, where nodes must exchange information quickly.

19. Efficient Packet Forwarding Approach in Vehicular Ad Hoc Networks Using EBGR Algorithm

Prasanth, K; Jayasudha, K; Chandrasekar, C 2010-01-01

VANETs (vehicular ad hoc networks) are highly mobile wireless ad hoc networks and will play an important role in public safety communications and commercial applications. Routing of data in VANETs is a challenging task due to the rapidly changing topology and high-speed mobility of vehicles. Conventional routing protocols in MANETs (mobile ad hoc networks) are unable to fully address the unique characteristics of vehicular networks. In this paper, we propose EBGR (Edge Node Based Greedy Routing), a reliable greedy position-based routing approach that forwards packets to the node present at the edge of the transmission range of the source/forwarding node as the most suitable next hop, taking into consideration nodes moving in the direction of the destination. We propose the Revival Mobility model (RMM) to evaluate the performance of our routing technique. This paper presents a detailed description of our approach, and simulation results show that the packet delivery ratio is improved considerably compared to other routing techniques of V…

20. Efficient Packet Forwarding Approach in Vehicular Ad Hoc Networks Using EBGR Algorithm

K. Jayasudha 2010-01-01

VANETs (vehicular ad hoc networks) are highly mobile wireless ad hoc networks and will play an important role in public safety communications and commercial applications. Routing of data in VANETs is a challenging task due to the rapidly changing topology and high-speed mobility of vehicles. Conventional routing protocols in MANETs (mobile ad hoc networks) are unable to fully address the unique characteristics of vehicular networks. In this paper, we propose EBGR (Edge Node Based Greedy Routing), a reliable greedy position-based routing approach that forwards packets to the node present at the edge of the transmission range of the source/forwarding node as the most suitable next hop, taking into consideration nodes moving in the direction of the destination. We propose the Revival Mobility model (RMM) to evaluate the performance of our routing technique. This paper presents a detailed description of our approach, and simulation results show that the packet delivery ratio is improved considerably compared to other routing techniques of VANET.
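A toy version of the EBGR next-hop rule as described: among reachable neighbours moving toward the destination, prefer the one closest to the edge of the sender's transmission range (i.e., farthest away while still reachable). The positions, velocities and range R are made-up example values:

import math

R = 250.0  # assumed transmission range in metres

def moving_toward(pos, vel, dst):
    # Positive projection of velocity onto the direction of the destination.
    dx, dy = dst[0] - pos[0], dst[1] - pos[1]
    return vel[0] * dx + vel[1] * dy > 0

def ebgr_next_hop(src, dst, neighbours):
    candidates = [(pos, vel) for pos, vel in neighbours
                  if math.dist(src, pos) <= R and moving_toward(pos, vel, dst)]
    if not candidates:
        return None
    # Farthest reachable candidate = nearest to the edge of the range.
    return max(candidates, key=lambda c: math.dist(src, c[0]))[0]

neigh = [((200, 10), (5, 0)), ((120, -30), (-5, 0)), ((240, 5), (3, 1))]
print(ebgr_next_hop((0, 0), (1000, 0), neigh))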
1. A Secure and Pragmatic Routing Protocol for Mobile Ad hoc Networks

LIU Zhi-yuan 2008-01-01

An ad hoc network is a group of wireless mobile computers (or nodes) in which individual nodes cooperate by forwarding packets for each other, allowing nodes to communicate beyond direct wireless transmission range. Because of node mobility and power limitations, the network topology changes frequently. The routing protocol plays an important role in the ad hoc network. A recent trend in ad hoc network routing is the reactive on-demand philosophy, where routes are established only when required. As an optimization of the current Dynamic Source Routing protocol, a secure and pragmatic route selection scheme based on reputation systems is proposed. We design the Secure and Pragmatic Routing protocol and implement simulation models using GloMoSim. Simulation results show that the Secure and Pragmatic Routing protocol provides better experimental results for packet delivery ratio, power consumption and system throughput than the Dynamic Source Routing protocol.

2. Novel multi-path routing scheme for UWB Ad hoc network

XU Ping-ping; YANG Cai-yu; SONG Shu-qing; BI Guang-guo 2005-01-01

Routing protocols play an important role in ad hoc network performance. After analyzing some problems with the DSR, SMR, and AMR protocols, a new routing protocol suitable for UWB ad hoc networks is proposed in this paper. The new routing protocol utilizes the orientation capability of UWB and tries to obtain sufficient route information while decreasing the network load caused by route discovery. Simulation results show that the routing load of the new protocol is lower and its throughput higher than that of DSR; as node mobility increases, these advantages become more obvious.

3. Topology-Transparent Transmission Scheduling Algorithms in Wireless Ad Hoc Networks

MA Xiao-lei; WANG Chun-jiang; LIU Yuan-an; MA Lei-lei 2005-01-01

In order to maximize the average throughput and minimize the transmission slot delay in wireless ad hoc networks, an optimal topology-transparent transmission scheduling algorithm, multichannel Time-Spread Multiple Access (TSMA), is proposed. Further analysis shows that the maximum degree is very sensitive to the network performance for a wireless ad hoc network with N mobile nodes. Moreover, the proposed multichannel TSMA can improve the average throughput M times and decrease the average transmission slot delay M times, compared with single-channel TSMA, when M channels are available.

4. Decentralized cooperative spectrum sensing for ad-hoc disaster relief network clusters

Pratas, Nuno; Marchetti, Nicola; Prasad, Neeli R. 2010-01-01

Disaster relief networks need to be highly adaptable and resilient in order to meet emergency service demands. Cognitive-radio-enhanced ad hoc architectures have been put forward as candidates to enable such networks. Spectrum sensing, the cornerstone of the cognitive radio paradigm, has been the target of intensive research, whose main common conclusion was that the achievable spectrum sensing accuracy can be greatly enhanced through the use of cooperative sensing schemes. When considering applying cognitive radio to ad hoc disaster relief networks, the use of spectrum sensing…
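Both this abstract and the next rest on cooperative sensing, where individual (noisy) local decisions are fused into one verdict. A minimal sketch using a k-out-of-n voting rule (the OR rule is the special case k = 1); the detection and false-alarm probabilities are invented:

import random

def local_decision(p_detect=0.7, p_false_alarm=0.05, primary_present=True):
    # Simulate one sensor's imperfect local decision.
    p = p_detect if primary_present else p_false_alarm
    return random.random() < p

def fuse(decisions, k=2):
    # Declare the channel occupied if at least k sensors agree.
    return sum(decisions) >= k

votes = [local_decision() for _ in range(5)]
print(votes, "->", "occupied" if fuse(votes) else "free")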
5. Centralized cooperative spectrum sensing for ad-hoc disaster relief network clusters

Pratas, Nuno; Marchetti, Nicola; Prasad, Neeli R. 2010-01-01

Disaster relief networks have to be highly adaptable and resilient. Cognitive-radio-enhanced ad hoc architectures have been put forward as a candidate to enable such networks. Spectrum sensing is the cornerstone of the cognitive radio paradigm, and it has been the target of intensive research, whose main common conclusion was that the achievable spectrum sensing accuracy can be greatly enhanced through the use of cooperative sensing schemes. When considering applying cognitive radio to ad hoc disaster relief networks, cooperative spectrum sensing schemes are paramount. A centralized cluster…

6. Convergence of Secure Vehicular Ad-Hoc Network and Cloud in Internet of Things

Kulkarni, Nandkumar P.; Prasad, Neeli R.; Lin, Tao 2016-01-01

A vehicular ad hoc network (VANET) is a highly mobile, autonomous and self-organizing network of vehicles, and a particular case of a mobile ad hoc network (MANET). With the recent advances in the arena of information and communication technology (ICT) and computing, researchers have envisioned… Among the challenges in VANETs are limited computing capability, small onboard storage, safety, reliability, etc. Among the solutions proposed recently, Vehicular Cloud Computing (VCC) is one: VCC is a technology that provides on-demand services, namely Software-as-a-Service (SaaS), Storage…

7. The Study of Routing Strategies in Vehicular Ad-Hoc Network to Enhance Security

Parveen Kumar 2012-04-01

VANET, or intelligent vehicular ad hoc networking, defines an intelligent way of using vehicular networking. VANET integrates multiple ad hoc networking technologies, such as Wi-Fi IEEE 802.11p, WAVE IEEE 1609, WiMAX IEEE 802.16, Bluetooth, IrDA and ZigBee, for easy, accurate, effective and simple communication between vehicles under dynamic mobility. Effective measures such as media communication between vehicles can be enabled, as well as methods to track automotive vehicles. VANET helps in defining safety measures in vehicles, streaming communication between vehicles, infotainment and telematics.

8. Analysing the Behaviour and Performance of Opportunistic Routing Protocols in Highly Mobile Wireless Ad Hoc Networks

Varun G Menon 2016-10-01

9. A Prototype System for Using Multiple Radios in Directional MANET (Mobile Ad Hoc Network): A NISE Funded Applied Research Project

2013-09-01

Technical Document 3276, September 2013. It is difficult to employ directional antennas in a mobile ad hoc network (MANET), as most current radio and wireless networking protocols were…

10. Minimization of energy consumption in ad hoc networks (Minimisation de la consommation d'energie dans les reseaux ad hoc)

Senouci, S.M.; Pujolle, G. (Paris-6 Univ., Lab. LIP6, 75, France) 2005-04-01

An ad hoc network is a collection of wireless devices forming a temporary network independent of any administration or fixed infrastructure. The main benefits of this new generation of mobile networks are flexibility and low cost. Wireless devices have maximum utility when they can be used 'anywhere at anytime'. However, one of the greatest limitations to that goal is their finite power supply. Since batteries provide limited power, a general constraint of wireless communication is the short continuous operation time of mobile terminals. This constraint is even more important for ad hoc networks, since every terminal has to perform the functions of a router. Therefore, energy consumption should be a crucial consideration when designing new communication protocols, and particularly ad hoc routing protocols. We propose, in this paper, some extensions to the most important on-demand routing algorithm, AODV (Ad hoc On-demand Distance Vector). The discovery mechanism in these extensions uses energy as a routing metric. These algorithms improve network survivability by maintaining network connectivity, which is a strong requirement for high-quality communication. They achieve this objective with low message overhead for computing routes and without affecting the other network protocol layers. (authors)
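The summary above says energy is used as the routing metric without giving the exact cost function, so the sketch below shows one common choice, max-min routing, which selects the route whose weakest node has the most residual battery; the route data is illustrative:

# Residual battery levels of the intermediate nodes on each candidate route.
routes = {
    "route-1": [0.9, 0.2, 0.8],
    "route-2": [0.6, 0.5, 0.7],
}

# Max-min rule: maximise the bottleneck (minimum) residual energy,
# which avoids draining an already weak node.
best = max(routes, key=lambda r: min(routes[r]))
print("selected:", best, "bottleneck energy:", min(routes[best]))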
11. A Globally Accessible List (GAL) Based Recovery Concept In Mobile Ad-hoc Network

A. K. Daniel 2011-04-01

A mobile ad hoc network is a mobile, multi-hop wireless network capable of autonomous operation, whose primary role is to provide reliable end-to-end communication between nodes in the network. Achieving reliable transmission in a mobile wireless network is difficult, however, due to changes in the network topology caused by node mobility. Modern communication networks are becoming increasingly diverse, a consequence of a growing array of devices and services, both wired and wireless. There are various protocols to facilitate communication in ad hoc networks, such as DSR and TORA, but these approaches end up utilizing resources inefficiently after link failure and congestion. This paper proposes an approach to overcome this problem: we add some static nodes which only keep information related to the current working path and which also help in quick recovery in case of link failure.

12. Design and analysis of a network coding algorithm for ad hoc networks

王远; 徐华; 贾培发 2015-01-01

Network coding is proven to have advantages in both wireline and wireless networks, with appropriate network coding schemes designed for the underlying networks. Considering the strong node mobility of aviation communication networks, a hop-by-hop network coding algorithm based on ad hoc networks is proposed. Compared with COPE-like network coding algorithms, the proposed algorithm does not require overhearing other nodes, which meets the confidentiality requirements of aviation communication networks, saves resource consumption and promises lower processing delay. To analyze the performance of the network coding algorithm in scalable networks with different traffic models, a typical network was built in a network simulator, through which receiving accuracy rate and receiving delay were both examined. The simulation results indicate that, by virtue of network coding, the proposed algorithm works well and improves performance significantly: it achieves better receiving accuracy and lower receiving delay than any of the traditional networks without coding. It applies to both symmetric and asymmetric traffic flows and, in particular, achieves much better performance as the network scale grows. The algorithm therefore has great potential in large-scale multi-hop aviation communication networks.
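A minimal sketch of the XOR flavour of network coding at a two-flow relay, the textbook construction (not necessarily the exact scheme of the paper above): the relay broadcasts a XOR b once, and each endpoint recovers the other's packet using its own copy:

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"hello from A!"
pkt_b = b"hi back from B"[:len(pkt_a)]     # equal lengths for simplicity

coded = xor_bytes(pkt_a, pkt_b)            # one broadcast instead of two unicasts
assert xor_bytes(coded, pkt_a) == pkt_b    # node A recovers B's packet
assert xor_bytes(coded, pkt_b) == pkt_a    # node B recovers A's packet
print("coded payload:", coded.hex())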
13. Energy Research and Development Administration Ad Hoc Computer Networking Group: experimental program

Cohen, I. 1975-03-19

The Ad Hoc Computer Networking Group was established to investigate the potential advantages and costs of newer forms of remote resource sharing and computer networking. The areas of research and investigation that are within the scope of the ERDA CNG are described. (GHT)

14. Multipath routing and multiple description coding in ad-hoc networks: A simulation study

Díaz, I.F.; Epema, D.; de Jongh, J. 2004-01-01

The nature of wireless multihop ad hoc networks makes it a challenge to offer connections of an assured quality. In order to improve the performance of such networks, multipath routing in combination with Multiple Description Coding (MDC) has been proposed. By splitting up streams of multimedia traf…

15. Performance of Ad Hoc Networks with Two-Hop Relay Routing and Limited Packet Lifetime

Al Hanbali, Ahmad; Nain, Philippe; Altman, Eitan 2006-01-01

Considered is a mobile ad hoc network consisting of three types of nodes (source, destination and relay nodes) and using the two-hop relay routing protocol. Packets at relay nodes are assumed to have a limited lifetime in the network. All nodes are moving inside a bounded region according to some ra…

16. Schmidt, Ricardo de O.; Pras, Aiko; Gomes, Reinaldo; Lehnert, Ralf 2011-01-01

Ad hoc networks are supposed to operate autonomously, and self-* technologies are therefore fundamental to their deployment. Several of these solutions have been proposed during the last few years, covering most layers and functionalities of networking systems. Addressing is one of the critical net…

17. Fostering Sociability in Learning Networks through Ad-Hoc Transient Communities

Sloep, Peter 2008-01-01

Sloep, P. B. (2009). Fostering Sociability in Learning Networks through Ad-Hoc Transient Communities. In M. Purvis & B. T. R. Savarimuthu (Eds.), Computer-Mediated Social Networking. First International Conference, ICCMSN 2008, LNAI 5322 (pp. 62-75). Heidelberg, Germany: Springer. June 11-13, 2008.

19. Abiding Geocast for Warning Message Dissemination in Vehicular Ad Hoc Networks

Yu, Qiangyuan; Heijenk, Geert 2008-01-01

Vehicular ad hoc networks (VANETs) are emerging as a new network environment for intelligent transportation systems (ITS). In many applications envisaged for VANETs, traffic information needs to be disseminated to a group of relevant vehicles and maintained for a duration of time. Here a system of a…

20. New grid based test bed environment for carrying out ad-hoc networking experiments

Johnson, D 2006-09-01

…and the third is to do analysis on a real test bed network which has implemented the ad hoc networking protocol. This paper concerns the third option. Most researchers who have done work on test bed environments have used either indoor Wi-Fi inter-office links…

1. Using Real-World Car Traffic Dataset in Vehicular Ad Hoc Network Performance Evaluation

Lucas Rivoirard 2016-12-01

Vehicular ad hoc networking is an emerging paradigm which is gaining much interest with the development of new topics such as the connected vehicle, the autonomous vehicle, and new high-speed mobile communication technologies such as 802.11p and LTE-D. This paper presents a brief review of the different mobility models used for evaluating the performance of routing protocols and applications designed for vehicular ad hoc networks. In particular, it describes how accurate mobility traces can be built from a real-world car traffic dataset that embeds the main characteristics affecting vehicle-to-vehicle communications.
An effective use of the proposed mobility models is illustrated in various road traffic conditions involving communicating vehicles equipped with 802.11p. This study shows that such a dataset actually contains additional information that cannot be obtained completely from other analytical or simulated mobility models, and that this affects the results of performance evaluation in vehicular ad hoc networks.

2. Cross-Layer Interaction in Wireless Ad Hoc Networks: A Practical Example

Gauthier, Vincent; Marot, Michel; Becker, Monique

This paper presents the design and the performance evaluation of a joint process between the PHY (physical) layer and the routing layer in multi-hop wireless ad hoc networks. This cross-layer interaction between the PHY and routing layers allows each node in an ad hoc network to evaluate the performance of each path in its routing table in terms of bit error rate (BER) and to classify each path accordingly. Routing information from poor-quality links is not forwarded, leading to the selection of high-quality links during the routing process. An implementation of our cross-layer algorithm based on Ad hoc On-demand Distance Vector (AODV) is presented, along with simulation results showing significant improvements in terms of additional throughput and lower BER. Furthermore, inherent in our mechanism's design, the network overhead introduced by routing protocols is reduced.

3. Energy-Aware Routing Protocol for Ad Hoc Wireless Sensor Networks

Mann, Raminder P. 2005-01-01

Wireless ad hoc sensor networks differ from wireless ad hoc networks in the following respects: low energy, lightweight routing protocols, and adaptive communication patterns. This paper proposes an energy-aware routing protocol (EARP) suitable for ad hoc wireless sensor networks and presents an analysis of its energy consumption in the various phases of route discovery and maintenance. Based on the energy consumption associated with route request processing, EARP advocates minimizing route requests by allocating dynamic route expiry times. The paper introduces a unique mechanism for estimating route expiry time based on the probability of route validity, which is a function of time, number of hops, and mobility parameters. In contrast to AODV, EARP reduces the repeated flooding of route requests by maintaining valid routes for longer durations.
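EARP derives the route expiry time from the probability that a route is still valid as a function of time, hop count and mobility, but this summary gives no formula; below is a placeholder sketch assuming an exponential decay in hops, speed and time, inverted at a confidence threshold. The decay model, constant k and threshold are assumptions:

import math

def route_expiry(hops, speed, k=0.001, confidence=0.9):
    # Assumed model: P_valid(t) = exp(-k * hops * speed * t).
    # Return the largest t with P_valid(t) >= confidence.
    return -math.log(confidence) / (k * hops * speed)

for hops in (2, 5, 8):
    print(hops, "hops ->", round(route_expiry(hops, speed=5.0), 1), "s")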
4. Throughput Enhancement Using Multiple Antennas in OFDM-based Ad Hoc Networks under Transceiver Impairments

Zhao, Pengkai 2010-01-01

Transceiver impairments, including phase noise, residual frequency offset, and imperfect channel estimation, significantly affect the performance of multiple-input multiple-output (MIMO) systems. However, these impairments are not well addressed when analyzing the throughput of MIMO ad hoc networks. In this paper, we present an analytical framework for evaluating the throughput of a MIMO OFDM system under the impairments of phase noise, residual frequency offset, and imperfect channel estimation. Using this framework, we evaluate the Maximum Sum Throughput (MST) in ad hoc networks by optimizing the power and modulation schemes of each user. Simulations demonstrate not only the improvement in the MST from using multiple antennas, but also the loss in the MST due to the transceiver impairments. The proposed analytical framework is further applied to the distributed implementation of MST in ad hoc networks, where the loss caused by impairments is also evaluated.

5. Study of Impact of Mobile Ad hoc Networking and its Future Applications

Ashema Hasti 2012-01-01

Today, many people carry numerous portable devices, such as laptops, mobile phones, PDAs and MP3 players, for use in their professional and private lives. For the most part, these devices are used separately; that is, their applications do not interact. Imagine, however, if they could interact directly: participants at a meeting could share documents or presentations, and all communication could automatically be routed through the wireless corporate campus network. These examples of spontaneous, ad hoc wireless communication between devices might be loosely defined as a scheme, often referred to as ad hoc networking, which allows devices to establish communication anytime and anywhere, without the aid of a central infrastructure. This paper describes the concept of mobile ad hoc networking (MANET) and points out some of its applications that can be envisioned for the future. The paper also presents two of the technical challenges MANET poses, namely geocasting and QoS.

6. Performance Analysis of QoS Multicast Routing in Mobile Ad Hoc Networks Using Directional Antennas

Yuan Li 2010-12-01

In this paper, a quality of service (QoS) multicast routing protocol for mobile ad hoc networks (MANETs) using directional antennas is presented. Many important applications, such as audio/video conferencing, require quality of service guarantees. Directional antenna technology provides the capability for a considerable increase in spatial reuse, which increases the efficiency of communication. This paper studies TDMA-based timeslot allocation and directional antennas, and presents an effective algorithm for calculating the bandwidth of a multicast tree. We also propose a novel on-demand QoS multicast routing algorithm for TDMA-based mobile ad hoc networks using directional antennas. Simulation results show the performance of this QoS multicast routing algorithm in TDMA-based mobile ad hoc networks using directional antennas.

7. Wireless Ad-hoc Network Model for Video Transmission in the Tunnel of Mine

Zhao Xu 2011-01-01

Wireless ad hoc networks have been widely used for their flexibility and quick deployment, especially in emergencies. Recently they have been introduced in underground coal mines for rescue after disasters such as gas explosions. We construct a network model named the Chain Model to simulate the special circumstances in the tunnel of a mine. Moreover, to study the effects of different routing protocols used in this model when transmitting video data, Destination-Sequenced Distance-Vector (DSDV), Dynamic Source Routing (DSR) and Ad hoc On-demand Distance Vector (AODV) are compared with each other in experiments based on our model. The results indicate that AODV performs best among the three protocols in this model in terms of packet loss ratio, end-to-end delay and throughput, which is significant for our future research on ad hoc networks for rescue in underground coal mines.
8. A Distributed Authentication Algorithm Based on GQ Signature for Mobile Ad Hoc Networks

YAO Jun; ZENG Gui-hua 2006-01-01

Identity authentication plays an important role in ad hoc networks as a part of the security mechanism. On the basis of the GQ signature scheme, a new GQ threshold group signature scheme is presented, from which a novel distributed algorithm is proposed to achieve multi-hop authentication for mobile ad hoc networks. In addition, a protocol verifying identity with zero-knowledge proofs is designed, so that the reuse of certificates becomes a reality. Moreover, the security of this algorithm is proved in the random oracle model. With low computation and communication costs, this algorithm is efficient and secure, and especially suitable for mobile ad hoc networks characterized by distributed computing, dynamic topology and multi-hop communications.

9. Reliable and Efficient Broadcasting in Asymmetric Mobile Ad Hoc Networks Using Minimized Forward Node List Algorithm

Marimuthu Murugesan 2011-01-01

10. Calculation and Analysis of Destination Buffer for Multimedia Service in Mobile Ad Hoc Network

ZHOU Zhong; MAO Yu-ming; JIANG Zhi-qong 2005-01-01

Jitter is one of the most important issues for multimedia real-time services in future mobile ad hoc networks (MANETs). A thorough theoretical analysis of the destination buffer for smoothing the jitter of real-time services in a MANET is given. The theoretical results are applied to moderately populated ad hoc networks in our simulation; the simulation results show that by predicting and adjusting the destination buffer in this way, jitter is alleviated to a large extent, which contributes much to the quality of service (QoS) in a MANET.

11. Energy Efficient and QoS sensitive Routing Protocol for Ad Hoc Networks

Saeed Tanoli, Tariq; Khalid Khan, Muhammad 2013-12-01

Efficient routing is an important part of wireless ad hoc networks. Since resources in ad hoc networks are limited, there are many constraints, such as bandwidth, battery consumption and processing cycles. Reliability is also necessary, since there is no allowance for invalid or incomplete information (and expired data is useless). Various protocols perform routing by considering one parameter while ignoring the others. In this paper we present a protocol that finds routes on the basis of the bandwidth, energy and mobility of the nodes participating in the communication.

12. An Effective Capacity Estimation Scheme in IEEE802.11-based Ad Hoc Networks

H. Zafar 2012-11-01

Capacity estimation is a key component of any admission control scheme required to support quality of service provision in mobile ad hoc networks. A range of schemes have previously been proposed to estimate residual capacity, derived from window-based measurements of channel utilization. In this paper, a simple and improved mechanism to estimate residual capacity in IEEE 802.11-based ad hoc networks is presented. The scheme proposes the use of a 'forgiveness' factor to weight these previous measurements, and is shown through simulation-based evaluation to provide accurate utilization estimation and improved residual-capacity-based admission control.
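A sketch of a 'forgiveness'-factor estimate as described: older channel-busy measurements are down-weighted, so with factor f in (0, 1] the sample i windows ago gets weight f**i. The window contents, channel capacity and value of f below are illustrative, not the paper's:

def residual_capacity(busy_samples, capacity=1.0, f=0.7):
    # busy_samples[0] is the most recent busy-ratio measurement in [0, 1];
    # older samples are progressively "forgiven" (down-weighted).
    weights = [f ** i for i in range(len(busy_samples))]
    busy = sum(w * s for w, s in zip(weights, busy_samples)) / sum(weights)
    return capacity * (1.0 - busy)

print(residual_capacity([0.8, 0.4, 0.2]))  # recent congestion dominates the estimate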
13. Comparative study of Attacks on AODV-based Mobile Ad Hoc Networks

Ipsa De 2011-01-01

In recent years, the use of mobile ad hoc networks (MANETs) has become widespread in many applications. The lack of infrastructure in MANETs makes the detection and control of security hazards all the more difficult, and security is becoming a major concern and bottleneck in their application. In this paper, an attempt has been made to thoroughly study the blackhole attack, one of the possible attacks on ad hoc networks running the AODV routing protocol, together with a possible approach to blackhole attack detection.

14. Performance Analysis of Routing Protocols in Ad-hoc and Sensor Networking Environments

L. Gavrilovska 2009-06-01

Ad hoc and sensor networks have lately become increasingly popular wireless networking concepts. This paper analyzes and compares prominent routing schemes in these networking environments. The knowledge obtained can help users better understand short-range wireless network solutions, leading to options for implementation in various scenarios. In addition, it should aid researchers in developing protocol improvements for the technologies of interest.

15. Joanne Mun-Yee Lim 2016-05-01

Cognitive radio networks and vehicular ad hoc networks (VANETs) are recent emerging concepts in wireless networking. A cognitive radio network obtains knowledge of its operational geographical environment to manage the sharing of spectrum between primary and secondary users, while a VANET shares emergency safety messages among vehicles to ensure the safety of users on the road. Cognitive radio is employed in VANETs to ensure the efficient use of spectrum, as well as to support VANET deployment. Random increases and decreases in spectrum users, the unpredictable nature of VANETs, high mobility, varying interference, security, packet scheduling and priority assignment are the challenges encountered in a typical cognitive VANET environment. This paper provides a survey and critical analysis of the different challenges of cognitive radio VANETs, with a discussion of the open issues, challenges and performance metrics for different cognitive radio VANET applications.

16. Inter-Cluster Routing Authentication for Ad Hoc Networks by a Hierarchical Key Scheme

Yueh-Min Huang; Hua-Yi Lin; Tzone-I Wang 2006-01-01

17. Time synchronization in ad-hoc wireless sensor networks

Sharma, Nishant 2013-06-01

Advances in micro-electronics and developments in various technologies have given birth to this era of wireless sensor networks. A sensor network provides information about the surrounding environment by sensing it, and clock synchronization in wireless sensor networks plays a vital role in maintaining the integrity of the entire network. In this paper, two major low-energy clock synchronization algorithms, Reference Broadcast Synchronization (RBS) and the Timing-sync Protocol for Sensor Networks (TPSN), are simulated; they achieve a high level of accuracy and reliability, handle substantially greater node densities, and support mobility, and hence perform well under all possible conditions.
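A minimal sketch of the RBS idea: a beacon is broadcast, two receivers timestamp its arrival with their own clocks, and exchanging those timestamps yields the pairwise clock offset without involving the sender's clock; averaging over several beacons reduces noise. The timestamps below are fabricated examples:

def rbs_offset(recv_times_a, recv_times_b):
    # Each pair is the same broadcast beacon observed by two receivers.
    diffs = [a - b for a, b in zip(recv_times_a, recv_times_b)]
    return sum(diffs) / len(diffs)  # estimated offset of clock A relative to B

a = [100.002, 150.003, 200.001]   # node A's local reception times (s)
b = [100.000, 150.000, 199.999]   # node B's local reception times (s)
print("offset A-B ~", round(rbs_offset(a, b), 4), "s")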
18. Performance Analysis of Mobile Ad Hoc Unmanned Aerial Vehicle Communication Networks with Directional Antennas

Abdel Ilah Alshbatat 2010-01-01

Unmanned aerial vehicles (UAVs) have the potential to create an ad hoc communication network in the air. Most UAVs used in communication networks are equipped with wireless transceivers using omnidirectional antennas. In this paper, we consider a collection of UAVs that communicate through wireless links as a mobile ad hoc network using directional antennas. The network design goal is to maximize throughput and minimize end-to-end delay. In this respect, we propose a new medium access control protocol for a network of UAVs with directional antennas. We analyze the communication channel between the UAVs and the effect of aircraft attitude on network performance. Using the Optimized Network Engineering Tool (OPNET), we compare our protocol with the IEEE 802.11 protocol for omnidirectional antennas. The simulation results show performance improvements in end-to-end delay as well as throughput.

19. Simulation of Efficiency in Mobile Ad Hoc Networks using OMNeT++

Varun Manchikalapudi 2015-08-01

A network is a group of two or more computer systems linked together; computer networks are categorized by topology, protocol and architecture. A mobile ad hoc network (MANET) is a self-configuring, infrastructureless network of mobile devices connected wirelessly. Ad hoc networks exhibit unfair behaviour in flow control, especially in the case of the IEEE 802.11 MAC layer. Introducing efficiency in 802.11 is not an easy task, as it reduces the overall global throughput. The network should therefore be designed to deal with both fairness and throughput by maximizing aggregate throughput. Such a network design can be efficiently implemented in an evolving simulation tool named OMNeT++.

20. Enhancements for distributed certificate authority approaches for mobile wireless ad hoc networks

Van Leeuwen, Brian P.; Michalski, John T.; Anderson, William Erik 2003-12-01

Mobile wireless ad hoc networks that are resistant to adversarial manipulation are necessary for distributed systems used in military and security applications. Critical to the successful operation of these networks, which operate in the presence of adversarial stressors, are robust and efficient information assurance methods. In this report we describe necessary enhancements for a distributed certificate authority (CA) used in secure wireless network architectures. The necessary cryptographic algorithms used in distributed CAs are described, and implementation enhancements of these algorithms in mobile wireless ad hoc networks are developed. The enhancements support a network's ability to detect compromised nodes and facilitate distributed CA services. We provide insights into the impact the enhancements will have on network performance, with timing diagrams and preliminary network simulation studies.

1. A Multiobjective Optimization Framework for Routing in Wireless Ad Hoc Networks

Jaffrès-Runser, Katia; Gorce, Jean-Marie 2009-01-01

Wireless ad hoc networks are seldom characterized by one single performance metric, yet the current literature lacks a flexible framework to assist in characterizing the design tradeoffs in such networks. In this work, we address this problem by proposing a new modeling framework for routing in ad hoc networks which, used in conjunction with metaheuristic multiobjective search algorithms, results in a better understanding of network behavior and performance when multiple criteria are relevant. Our approach is to take a holistic view of network management and control that captures the cross-interactions among interference management techniques implemented at various layers of the protocol stack. We present the Pareto optimal sets for an example sensor network when delay, robustness and energy are considered as performance criteria for the network.
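A small sketch of extracting a Pareto-optimal set of routes when delay, energy and (negated) robustness must all be minimised, matching the multiobjective view above; the candidate routes and their metric values are invented:

def dominates(x, y):
    # x dominates y if it is no worse in every objective and better in at least one.
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def pareto_front(candidates):
    return {name: m for name, m in candidates.items()
            if not any(dominates(other, m) for other in candidates.values())}

routes = {  # (delay ms, energy mJ, 1 - robustness)
    "r1": (20, 5.0, 0.1), "r2": (35, 3.0, 0.2), "r3": (40, 6.0, 0.3),
}
print(pareto_front(routes))  # r3 is dominated by r1 and drops out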
2. A QoS Routing Protocol based on Available Bandwidth Estimation for Wireless Ad Hoc Networks

Kaaniche, Heni; Frikha, Mounir; Kamoun, Farouk 2011-01-01

With the emergence of multimedia in mobile ad hoc networks, research on introducing quality of service (QoS) has received much attention. However, when designing a QoS solution, the estimation of the available resources still represents one of the main issues. This paper suggests an approach to estimating the available resources on a node, based on estimating the busy ratio of the shared channel. Our estimation considers several constraints related to the ad hoc transmission mode, such as interference phenomena. This approach is implemented on the AODV routing protocol; we call the new routing protocol AODVwithQoS. We also performed a performance evaluation by simulation using the NS2 simulator. The results confirm that AODVwithQoS provides QoS support in ad hoc wireless networks with good performance and low overhead.

3. A reliable routing algorithm based on fuzzy Petri net in mobile ad hoc networks

HU Zhi-gang; MA Hao; WANG Guo-jun; LIAO Lin 2005-01-01

A novel reliable routing algorithm for mobile ad hoc networks using a fuzzy Petri net with its reasoning mechanism is proposed, to increase reliability during route selection. The algorithm allows a structured representation of the network topology and has a fuzzy reasoning mechanism for finding the routing sprouting tree from the source node to the destination node in the mobile ad hoc environment. Finally, by comparing the degrees of reliability in the routing sprouting tree, the most reliable route can be computed. The algorithm not only offers the local reliability between each pair of neighboring nodes, but also provides global reliability for the whole selected route. The algorithm can be applied to most existing on-demand routing protocols, and simulation results show that routing reliability is increased by more than 80% when applying the proposed algorithm to the Ad hoc On-demand Distance Vector routing protocol.

4. Directed Dynamic Small-World Network Model for Worm Epidemics in Mobile ad hoc Networks

ZHU Chen-Ping; WANG Li; LIU Xiao-Ting; YAN Zhi-Jun 2012-01-01

We investigate the worm spreading process in mobile ad hoc networks with a susceptible-infected-recovered model on a two-dimensional plane. A medium access control mechanism operates within it, inhibiting transmission and the relaying of a message by other nodes inside the node's transmitting circle during speaking. We measure the rewiring probability p in terms of the transmitting range r and the average relative velocity v of the moving nodes, and map the problem onto a directed dynamic small-world network. A new scaling relation for the recovered portion of the nodes reveals the effect caused by geometric distance, which has been ignored by previous models.
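A toy discrete-time SIR simulation in the spirit of this model: an infected node can infect susceptibles inside its transmitting circle of radius r, then recovers with some probability. All parameters, and the simplification of a static snapshot with no node movement, are assumptions for illustration only:

import math, random

random.seed(1)
N, r, p_inf, p_rec = 100, 0.15, 0.5, 0.1
pos = [(random.random(), random.random()) for _ in range(N)]  # nodes on the unit square
state = ["S"] * N
state[0] = "I"                                                # one initial worm carrier

for step in range(30):
    infected = [i for i in range(N) if state[i] == "I"]
    for i in infected:
        for j in range(N):
            # Infection only within the transmitting circle of radius r.
            if state[j] == "S" and math.dist(pos[i], pos[j]) <= r and random.random() < p_inf:
                state[j] = "I"
        if random.random() < p_rec:
            state[i] = "R"

print({s: state.count(s) for s in "SIR"})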
4. Directed Dynamic Small-World Network Model for Worm Epidemics in Mobile ad hoc Networks ZHU Chen-Ping; WANG Li; LIU Xiao-Ting; YAN Zhi-Jun 2012-01-01 We investigate the worm spreading process in mobile ad hoc networks with a susceptible-infected-recovered model on a two-dimensional plane. A medium access control mechanism operates within it, inhibiting transmission and the relaying of a message by other nodes inside a node's transmitting circle while it is speaking. We measure the rewiring probability p with the transmitting range r and the average relative velocity (v) of the moving nodes, and map the problem onto a directed dynamic small-world network. A new scaling relation for the recovered portion of the nodes reveals the effect caused by geometric distance, which has been ignored by previous models.

5. R. S.D. Wahida Banu 2012-01-01

6. Intelligent Routing using Ant Algorithms for Wireless Ad Hoc Networks S. Menaka 2013-08-01 Full Text Available Wireless networks are a niche area of growing interest owing to their ability to control the physical environment even from remote locations. Intelligent routing, bandwidth allocation and power control techniques are the known critical factors for communication in these networks. Finding a feasible path between the communication end points is a challenging task in this type of network. The present study proposes an Ant Mobility Model (AMM), an on-demand, multi-path routing algorithm that exercises power control and coordinates the nodes to communicate with one another in a wireless network. The main goal of this protocol is to reduce overhead, congestion, and stagnation, while increasing the throughput of the network. The simulation results show that AMM is a promising solution for the mobility pattern in wireless networks like MANETs.

7. Awuor, F 2011-09-01 Full Text Available …to transmit at high power, leading to abnormal interference in the network, which degrades network performance (i.e., low data rates, loss of connectivity, among others). In this paper, the authors propose a rate adaptation based on pricing (RAP) algorithm…

8. A Novel Metric For Detection of Jellyfish Reorder Attack on Ad Hoc Network B. B. Jayasingh 2010-01-01 Full Text Available Ad hoc networks are susceptible to many attacks due to their unique characteristics, such as open network architecture, stringent resource constraints, a shared wireless medium and a highly dynamic topology. The attacks can be of different types, of which denial of service is one of the most difficult to detect and defend against. Jellyfish is a new denial-of-service attack that exploits the end-to-end congestion control mechanism of TCP (Transmission Control Protocol), which has a devastating effect on throughput. The architecture for detection of such an attack should be both distributed and cooperative to suit the needs of wireless ad-hoc networks; that is, every node in the wireless ad-hoc network should participate in intrusion detection. We intend to develop an algorithm that detects the jellyfish attack at a single node and that can be effectively deployed at all other nodes in the ad hoc network. We propose a novel metric that detects the jellyfish reorder attack based on Reorder Density. The comparison table shows the effectiveness of the novel metric; it also helps protocol designers develop counter-strategies for the attack.
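To make the reorder-density idea above concrete, the sketch below compares each packet's arrival position with its sequence number and histograms the displacements; a persistent, heavy tail of large displacements would be one indicator of jellyfish-style reordering. This is an illustrative simplification; the paper's exact metric may differ:

```python
# Simplified take on reorder density: histogram of (arrival position - sequence
# number) over a window of received packets, normalized to frequencies.

from collections import Counter

def reorder_density(arrivals):
    """arrivals: packet sequence numbers in the order they were received."""
    displacements = Counter()
    for recv_pos, seq in enumerate(arrivals):
        displacements[recv_pos - seq] += 1   # 0 means the packet was in order
    total = len(arrivals)
    return {d: n / total for d, n in sorted(displacements.items())}

print(reorder_density([0, 1, 2, 3, 4]))           # {0: 1.0} -- all in order
print(reorder_density([3, 0, 1, 2, 7, 4, 5, 6]))  # {-3: 0.25, 1: 0.75} -- shifted
```

An attacker that deliberately re-queues packets shifts mass away from displacement 0, which is the signature a detector at a single node could watch for.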
9. A Survey of Clustering Approaches for Mobile Ad Hoc Network Mihir Mehta 2014-02-01 Full Text Available In MANETs, clustering is currently a most significant research area. Clustering offers several advantages: it improves the stability of the network, enhances routing, enables efficient resource allocation among mobile nodes, and provides a hierarchical routing structure. This survey paper analyzes a number of clustering approaches widely used for partitioning mobile nodes into different virtual groups. Each clustering algorithm considers different parameters for the selection of the cluster head in a cluster. Cluster head election is invoked on demand and aims to decrease the computation and communication cost in a MANET. Each approach has its own pros and cons.

10. Topology-aware peer-to-peer overlay network for Ad-hoc WANG Shi-guo; JI Hong; LI Ting; MEI Jing-qing 2009-01-01 The mismatch between the structured peer-to-peer (P2P) overlay network, which is based on hashing, and the actual physical network leads to queries repeatedly passing through some nodes in the actual route when applied in ad-hoc networks. An approach to obtaining an appropriate node identifier (ID) bearing the node's local physical information is proposed, abandoning the traditional method of deriving the node ID by hashing the node's Internet protocol (IP) address, and a topology-aware overlay network suited to ad-hoc networks is constructed. The simulation results show that the overlay network constructed with the proposed method avoids routes being iteratively accessed. Meanwhile, it can effectively minimize latency and improve load balance.

11. PERFORMANCE EVALUATION OF WORMHOLE SECURITY APPROACHES FOR AD-HOC NETWORKS Ismail Hababeh 2013-01-01 Full Text Available Ad-hoc networks are promising but are exposed to the risk of wormhole attacks. A wormhole attack can be mounted easily and poses a serious threat in networks, particularly against various ad-hoc wireless networks. The wormhole attack distorts the network topology and decreases network performance. Therefore, identifying the possibility of wormhole attacks and recognizing techniques to defend against them are central to the security of wireless networks as a whole. In this study, we summarize state-of-the-art wormhole defense approaches, categorize most of the existing typical approaches, and discuss both the advantages and disadvantages of these methods. We also point out some unresolved areas in the wormhole problem and provide some directions for future exploration.

12. Cluster head Election for CGSR Routing Protocol Using Fuzzy Logic Controller for Mobile Ad Hoc Network K. Venkata Subbaiah 2010-01-01

13. Beamforming in Ad Hoc Networks: MAC Design and Performance Modeling Fakih, Khalil; Diouris, Jean-Francois; Andrieux, Guillaume 2009-01-01 …Our proposal jointly performs channel estimation and radio resource sharing. We validate the usefulness of the proposed MAC and evaluate the effects of channel estimation on network performance…

14. SPIZ: An Effective Service Discovery Protocol for Mobile Ad Hoc Networks Noh Donggeon 2007-01-01 Full Text Available The characteristics of mobile ad hoc networks (MANETs) require special care in the handling of service advertisement and discovery (Ad/D). In this paper, we propose a novel service Ad/D technique for MANETs. Our scheme avoids redundant flooding and reduces the system overhead by integrating Ad/D with the routing layer. It also tracks changing conditions, such as traffic and service popularity levels. Based on a variable zone radius, we have combined a push-based Ad/D with a pull-based Ad/D strategy.

15. SPIZ: An Effective Service Discovery Protocol for Mobile Ad Hoc Networks Donggeon Noh 2006-11-01 Full Text Available The characteristics of mobile ad hoc networks (MANETs) require special care in the handling of service advertisement and discovery (Ad/D). In this paper, we propose a novel service Ad/D technique for MANETs. Our scheme avoids redundant flooding and reduces the system overhead by integrating Ad/D with the routing layer. It also tracks changing conditions, such as traffic and service popularity levels. Based on a variable zone radius, we have combined a push-based Ad/D with a pull-based Ad/D strategy.
16. Hasan A. A. Al-Rawi 2014-01-01 Full Text Available Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.

17. Al-Rawi, Hasan A A; Yau, Kok-Lim Alvin; Mohamad, Hafizal; Ramli, Nordin; Hashim, Wahidah 2014-01-01 Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation, and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance the features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation, and exploration, as well as learning rate, must be well regulated, and the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
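The two entries above name the usual RL ingredients for route selection: a reward function, an exploration/exploitation policy, and a learning rate. The sketch below is a minimal, bandit-style simplification of that loop at a single node, not the papers' exact formulation; hop names, rewards, and parameter values are assumptions:

```python
# Minimal RL route-selection loop: one Q-value per candidate next hop,
# epsilon-greedy choice, and an incremental update from a delivery reward.

import random

ALPHA, EPSILON = 0.3, 0.1          # learning rate, exploration probability
q = {"B": 0.0, "C": 0.0}           # Q-value per candidate next hop at node A

def choose_next_hop():
    if random.random() < EPSILON:  # explore occasionally
        return random.choice(list(q))
    return max(q, key=q.get)       # otherwise exploit the best-known hop

def update(hop, reward):
    """reward: e.g. +1 for timely delivery, negative for loss or interference."""
    q[hop] += ALPHA * (reward - q[hop])

for _ in range(100):
    hop = choose_next_hop()
    reward = 1.0 if hop == "B" else -0.2   # pretend B is the better route
    update(hop, reward)
print(q)   # the Q-value for B should approach 1.0
```

Tuning EPSILON and ALPHA is precisely the "well regulated" requirement the simulation results above point to: too little exploration locks onto stale routes, too much wastes traffic on bad ones.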
18. Adaptation of mobile ad-hoc network protocols for sensor networks to vehicle control applications Sato, Kenya; Matsui, Yosuke; Koita, Takahiro 2005-12-01 As a platform for sensor network applications that monitor and control the physical environment from remote locations, the mobile ad-hoc network (MANET) has been the focus of many recent research and development efforts. A MANET, an autonomous system of mobile hosts, is characterized by multi-hop wireless links, the absence of any cellular infrastructure, and frequent host mobility. Many kinds of routing protocols for ad-hoc networks have been proposed and are still actively updated, because each application has different characteristics and requirements. Current studies show it is almost impossible to design an efficient routing protocol adapted to all kinds of applications. We therefore focused on a particular application, inter-vehicle communication for ITS (Intelligent Transport Systems), to evaluate the routing protocols. In our experiment, we defined several traffic flow models for inter-vehicle communication applications. By simulation, we evaluated the end-to-end delay and throughput performance of data transmission for inter-vehicle communications with the existing routing protocols. The result confirms the feasibility of using some routing protocols for inter-vehicle communication services.

19. Distributed geolocation algorithm in mobile ad hoc networks using received signal strength differences Guo, Shanzeng; Tang, Helen 2012-05-01

20. On Throughput Improvement of Wireless Ad Hoc Networks with Hidden Nodes Choi, Hong-Seok; Lim, Jong-Tae In this letter, we present a throughput analysis of wireless ad hoc networks based on the IEEE 802.11 MAC (Medium Access Control). In particular, our analysis includes the case with the hidden node problem, so that it can be applied to multi-hop networks. In addition, we suggest a new channel access control algorithm to maximize the network throughput and show the usefulness of the proposed algorithm through simulations.

1. Asymptotic Capacity of Wireless Ad Hoc Networks with Realistic Links under a Honey Comb Topology Asnani, Himanshu 2007-01-01 We consider the effects of Rayleigh fading and lognormal shadowing in the physical interference model for all the successful transmissions of traffic across the network. New bounds are derived for the capacity of a given random ad hoc wireless network that reflect the packet drop or capture probability of the transmission links. These bounds are based on a simplified network topology termed the honey-comb topology, under a given routing and scheduling scheme.

2. Simulation and evaluation of routing protocols for Mobile Ad Hoc Networks (MANETs) Kioumourtzis, Georgios A. 2005-01-01 Mobile Ad hoc Networks (MANETs) are of much interest to both the research community and the military because of the potential to establish a communication network in any situation that involves emergencies. Examples are search-and-rescue operations, military deployment in hostile environments, and several types of police operations. One critical open issue is how to route messages considering the characteristics of these networks. The nodes act as routers in an environment without a fixed…

3. Weighted cooperative routing for wireless mobile Ad-hoc network ZHAO Xian-jing; ZHENG Bao-yu; CHEN Chao 2007-01-01 A novel weighted cooperative routing algorithm (WCRA) is proposed in this article, based on a weighted metric combining the maximal remaining energy (MRE) of the relays and the maximal received SNR (MRS) of the nodes. Moreover, a cooperative routing protocol was implemented on the basis of WCRA. Simulation was then done on the network simulator (NS-2) platform to compare the performances of MRS, MRE and WCRA with that of the noncooperative destination-sequenced distance-vector (DSDV) protocol. The simulation results show that WCRA obtains a performance tradeoff between MRE and MRS in terms of delivery ratio and network lifetime, and can effectively improve the network lifetime at an acceptable loss of delivery ratio.
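The WCRA entry above combines normalized remaining energy with normalized received SNR into one relay metric. A hedged sketch of such a weighted score follows; the weight, normalization constants, and relay values are illustrative assumptions, not the paper's parameters:

```python
# WCRA-style weighted relay metric: score each candidate relay by a weighted
# combination of normalized remaining energy (the MRE side) and normalized
# received SNR (the MRS side), then pick the highest-scoring relay.

W = 0.5  # tradeoff weight between energy and SNR

def relay_score(remaining_energy, snr, e_max=100.0, snr_max=30.0):
    return W * (remaining_energy / e_max) + (1 - W) * (snr / snr_max)

relays = {
    "R1": dict(remaining_energy=80.0, snr=12.0),   # fresh battery, weaker link
    "R2": dict(remaining_energy=30.0, snr=25.0),   # tired battery, strong link
}
best = max(relays, key=lambda r: relay_score(**relays[r]))
print(best, {r: round(relay_score(**v), 3) for r, v in relays.items()})
```

Sliding W toward 1 favors network lifetime (energy), while sliding it toward 0 favors delivery ratio (link quality), which is the tradeoff the simulation results describe.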
4. Context discovery using attenuated Bloom filters in ad-hoc networks Liu, F.; Heijenk, Gerhard J. 2007-01-01 A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this paper. A Bloom filter is an efficient, space-saving data structure to represent context information. Attenuated Bloom filters are used to advertise the availability of c…
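Although the record above is truncated, the attenuated Bloom filter structure it names is well defined: layer i of the filter summarizes the context or services believed reachable in i hops, and a node merges its neighbours' filters shifted down one layer. The sketch below is a minimal illustration under assumed sizing and hashing choices:

```python
# Minimal attenuated Bloom filter: DEPTH stacked Bloom layers, where layer i
# describes items reachable in i hops. Sets stand in for bit arrays here.

M, K, DEPTH = 64, 3, 3   # bits per layer, hash count, advertised depth (hops)

def bits(item):
    return {hash((item, k)) % M for k in range(K)}   # K bit positions per item

def empty():
    return [set() for _ in range(DEPTH)]

def add_local(f, item):
    f[0] |= bits(item)                    # a node's own context lives at layer 0

def merge_neighbor(f, neighbor_filter):
    for i in range(DEPTH - 1):            # neighbour's layer i -> our layer i+1
        f[i + 1] |= neighbor_filter[i]

def maybe_reachable(f, item):
    """Smallest hop count whose layer contains all the item's bits, else None."""
    b = bits(item)
    return next((i for i, layer in enumerate(f) if b <= layer), None)

a, b = empty(), empty()
add_local(b, "printer")
merge_neighbor(a, b)
print(maybe_reachable(a, "printer"))   # 1 -> possibly one hop away
```

As with any Bloom filter, a positive answer is only probabilistic (false positives are possible), which is the price paid for the compact advertisements that make this attractive in bandwidth-constrained ad-hoc networks.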
5. Trust-based hexagonal clustering for efficient certificate management scheme in mobile ad hoc networks V S JANANI; M S K MANIKANDAN 2016-10-01 The wireless and dynamic nature of mobile ad hoc networks (MANETs) renders them more vulnerable to security attacks. However, providing a security mechanism implicitly has been a major challenge in such an ad-hoc environment. Certificate management plays an important role in securing an ad-hoc network. The certificate assignment, verification, and revocation complexity associated with the Public Key Infrastructure (PKI) framework is significantly large. The smaller the network, the lower the certificate management complexity; however, the smaller the size, the larger the overall infrastructural cost, and the more redundant certificates arise from multiple certificate assignments at the boundary regions, which in turn affects prompt and accurate certificate revocation. Taking these conflicting requirements into consideration, we propose the trust-based hexagonal clustering for efficient certificate management (THCM) scheme, to secure a MANET. In contrast to existing clustering techniques, we present a hexagonal geographic clustering model with the Voronoi technique, in which trust is accomplished. In particular, to compete against attackers, we initiate a certificate management strategy in which certificate assignment, verification, and revocation are carried out efficiently. The performance of THCM is evaluated by both simulation and empirical analysis in terms of the effectiveness of the revocation scheme (with respect to revocation rate and time), security, and communication cost. Besides, we conduct a mathematical analysis measuring the parameters obtained from the two platforms multiple times. Relevant results demonstrate that our design is efficient and guarantees a secured mobile ad hoc network.

6. A Fuzzy Logic Approach to Beaconing for Vehicular Ad hoc Networks Ghafoor, Kayhan Zrar; Bakar, Kamalrulnizam Abu; Eenennaam, van Martijn; Khokhar, Rashid Hafeez; Gonzalez, Alberto J. 2011-01-01 Vehicular Ad Hoc Network (VANET) is an emerging field of technology that allows vehicles to communicate together in the absence of fixed infrastructure. The basic premise of VANET is that a vehicle must detect other vehicles in the vicinity. This cognizance, awareness of other vehicles, c…

7. Bottlenecks in Two-Hop Ad Hoc Networks - Dividing Radio Capacity in a Smart Way Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Cloth, L.; Mandjes, M.; van der Mei, R.; Nunez Queija, R. In two-hop ad hoc networks the available radio capacity tends to be equally shared among the contending stations, which may lead to bottleneck situations in case of unbalanced traffic routing. We propose a generic model for evaluating adaptive capacity sharing strategies. We use infinite-state…

8. Ad-hoc transient communities in Learning Networks Connecting and supporting the learner Brouns, Francis 2009-01-01 Brouns, F. (2009). Ad-hoc transient communities in Learning Networks Connecting and supporting the learner. Presentation given for a Korean delegation of Chonnam National University and Dankook University (researchers dr. Jeeheon Ryu and dr. Minjeong Kim and a group of PhD and Master students). August

9. PUCA: a pseudonym scheme with user-controlled anonymity for vehicular ad-hoc networks (VANET) Förster, David; Kargl, Frank; Löhr, Hans 2014-01-01 Envisioned vehicular ad-hoc network (VANET) standards use pseudonym certificates to provide secure and privacy-friendly message authentication. Revocation of long-term credentials is required to remove participants from the system, e.g. in case of vehicle theft. However, the current approach to rev…

10. Mitigate DoS and DDoS attacks in Mobile Ad Hoc Networks Michalas, Antonis; Komninos, Nikos; Prasad, Neeli R. 2011-01-01 This paper proposes a technique to defeat Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in Ad Hoc Networks. The technique is divided into two main parts, combining game theory and cryptographic puzzles. Introduced first is a new client puzzle to prevent DoS attacks…

11. Preventive Aspect of Black Hole Attack in Mobile AD HOC Network Kumar Roshan 2012-06-01 Full Text Available A mobile ad hoc network is an infrastructure-less type of network. In this paper we present a prevention mechanism for the black hole attack in mobile ad hoc networks. The routing algorithms are analyzed and discrete properties of routing protocols are defined. The discrete properties support efficient distributed routing. The protocol is distributed and not dependent upon a centralized controlling node. Important features of Ad hoc On-demand Distance Vector routing (AODV) are inherited, and a new mechanism is combined with it to obtain a multipath routing protocol for mobile ad hoc networks (MANETs) that prevents the black hole attack. When the routing path is discovered and entered into the routing table, the combined protocol then searches for a new path at certain time intervals, and the previously entered path is refreshed in the routing table. The simulation is run on 50 moving nodes in an area of 1000 x 1000 square meters with a maximum node speed of 5 m/sec. The result is calculated for throughput versus number of black hole nodes, with pause times of 0 sec to 40 sec, 120 sec and 160 sec, when the threshold value is 1.0.

12. A survey of message diffusion protocols in mobile ad hoc networks Al Hanbali, A.M.; Ibrahim, M.; Simon, V.; Varga, E.; Carreras, I. 2008-01-01 For the last twenty years, mobile communications have experienced explosive growth. In particular, one area of mobile communication, Mobile Ad hoc Networks (MANETs), has attracted significant attention due to its multiple applications and its challenging research problems. On the other hand…

13. Performance Modeling of a Bottleneck Node in an IEEE 802.11 Ad-hoc Network Berg, van den Hans; Roijers, Frank; Mandjes, Michel R.H.; Kunz, T.; Ravi, S.S. 2006-01-01 The IEEE 802.11 MAC protocol, often used in ad-hoc networks, has the tendency to share the capacity equally amongst the active nodes, irrespective of their loads. An inherent drawback of this fair-sharing policy is that a node that serves as a relay-node for multiple flows is likely to become a bott…

14. Performance modeling of a bottleneck node in an IEEE 802.11 ad-hoc network
Berg, J.L. van den; Mandjes, M.; Roijers, F. 2006-01-01 The IEEE 802.11 MAC protocol, often used in ad-hoc networks, has the tendency to share the capacity equally amongst the active nodes, irrespective of their loads. An inherent drawback of this fair-sharing policy is that a node that serves as a relay-node for multiple flows is likely to become a bott…

15. A Fuzzy Logic Approach to Beaconing for Vehicular Ad hoc Networks Ghafoor, Kayhan Zrar; Bakar, Kamalrulnizam Abu; van Eenennaam, Martijn; Khokhar, Rashid Hafeez; Gonzalez, Alberto J. Vehicular Ad Hoc Network (VANET) is an emerging field of technology that allows vehicles to communicate together in the absence of fixed infrastructure. The basic premise of VANET is that a vehicle must detect other vehicles in the vicinity. This cognizance, awareness of other vehicles…

16. Facilitating community building in Learning Networks through peer tutoring in ad hoc transient communities Kester, Liesbeth; Sloep, Peter; Van Rosmalen, Peter; Brouns, Francis; Koné, Malik; Koper, Rob 2006-01-01 The full reference is: Kester, L., Sloep, P. B., Van Rosmalen, P., Brouns, F., Koné, M., & Koper, R. (2007). Facilitating Community Building in Learning Networks Through Peer-Tutoring in Ad Hoc Transient Communities. International Journal of Web based Communities, 3(2), 198-205.

17. Trust Management in Mobile Ad Hoc Networks for Bias Minimization and Application Performance Maximization 2014-02-26

18. Sybil Attack on Lowest Id Clustering Algorithm in The Mobile Ad Hoc Network Manu Sood 2012-10-01 Full Text Available It is quite a challenging task to achieve security in a mobile ad hoc network because of its open nature, dynamically changing topology, and lack of infrastructure and central management. A particularly harmful attack that takes advantage of these characteristics is the Sybil attack, in which a malicious node illegitimately claims multiple identities. This attack can exceedingly disrupt various operations of mobile ad hoc networks, such as data aggregation, voting, fair resource allocation schemes, misbehavior detection and routing mechanisms. Two routing mechanisms known to be vulnerable to the Sybil attack in mobile ad hoc networks are multi-path routing and geographic routing. In addition to these routing protocols, we show in this paper that the Sybil attack can also disrupt the head selection mechanism of the lowest-ID cluster-based routing protocol. To the best of our knowledge, this is the first time that a Sybil attack has been shown to disrupt this cluster-based routing protocol. To achieve this, we introduce a category of Sybil attack in which the malicious node varies its transmission power to create a number of virtual illegitimate nodes, called Sybil nodes, for the purpose of communication with legitimate nodes of the mobile ad hoc network. The variation in transmission power makes the Sybil attack more deadly and difficult to detect.
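The disruption described in the Sybil entry above is easy to see in miniature: lowest-ID clustering elects the smallest identifier in a neighborhood, so fabricated low IDs win automatically. The toy topology and IDs below are made up for the example:

```python
# Toy illustration of lowest-ID cluster-head election and how fabricated
# low-ID Sybil identities hijack it.

def elect_cluster_head(neighborhood_ids):
    """Classic lowest-ID rule: the smallest ID heard in the neighborhood wins."""
    return min(neighborhood_ids)

honest = {17, 23, 42}
print(elect_cluster_head(honest))                     # 17: legitimate head

# A Sybil node advertises virtual identities with artificially low IDs,
# varying its transmission power so each identity is heard by the cluster.
sybil_identities = {1, 2, 3}
print(elect_cluster_head(honest | sybil_identities))  # 1: attacker-controlled
```

Because the election rule trusts self-reported identifiers, a single physical attacker controlling several virtual IDs captures the cluster-head role without breaking any protocol message.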
19. Performance evaluation of fingerprint image processing for high security Ad-hoc network P.Velayutham 2010-06-01 Full Text Available With the rapid development of wireless technology, various mobile devices have been developed for military and civilian applications. Defense research and development has shown increasing interest in ad-hoc networks, because a military has to be mobile, and peer-to-peer is a good architecture for mobile communication in coalition operations. In this paper, the proposed methodology is a novel, robust approach to secure fingerprint authentication and matching techniques for implementation in ad-hoc wireless networks. This is a difficult problem in an ad-hoc network, as it involves bootstrapping trust between the devices. This paper presents a solution that provides fingerprint authentication techniques for shared communication in an ad-hoc network. In this approach, devices exchange a corresponding fingerprint with a master device for mutual communication, which then allows them to complete an authenticated key exchange protocol over the wireless link. The solution is based on authenticating the user's fingerprint through the master device; this master device handshakes with the corresponding slave device to authenticate the fingerprint against attacks on the wireless link, directly identifying the user's device that was proposed to talk to a particular unknown device in its physical proximity. The system is implemented in C#, with the user node for a variety of different devices modeled in Matlab.

20. A Layer-Cluster Key Agreement Protocol for Ad Hoc Networks ZHANG Li-ping; CUI Guo-hua; LEI Jian-yun; XU Jing-fang; LU She-jie 2008-01-01 Mobile ad hoc networks create additional challenges for implementing group key establishment due to resource constraints on nodes and dynamic changes in topology. The nodes in mobile ad hoc networks are usually low-power devices that run on battery power. As a result, the cost in node resources should be minimized when constructing a group key agreement protocol, so that battery life can be prolonged. To achieve this goal, in this paper we propose a secure, efficient group key agreement protocol based on the Burmester-Desmedt (BD) scheme and a layer-cluster group model, referred to as LCKM-BD, which is appropriate for large mobile ad hoc networks. In the layer-cluster group model, the BD scheme is employed to establish the group key, which can not only meet the security demands of mobile ad hoc networks but also improve execution performance. Finally, the proposed protocol LCKM-BD is compared with the BD, TGDH (tree-based group Diffie-Hellman), and GDH (group Diffie-Hellman) group key agreement protocols. The analysis results show that our protocol can significantly decrease both the computational overhead and communication costs with respect to these comparable protocols.

1. Multiplayer Game for DDoS Attacks Resilience in Ad Hoc Networks Michalas, Antonis; Komninos, Nikos; Prasad, Neeli R. 2011-01-01 This paper proposes a multiplayer game to prevent Distributed Denial of Service (DDoS) attacks in ad hoc networks. The multiplayer game is based on game theory and cryptographic puzzles. We divide requests from nodes into separate groups, which decreases the ability of malicious nodes to cooperate with one another in order to effectively mount a DDoS attack. Finally, through our experiments we have shown that the total overhead of the multiplayer game, as well as the total time that each node needs to be served, is affordable for devices that have limited resources and for environments like ad hoc…
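The cryptographic puzzles referenced in the entry above typically work by forcing a requester to pay a tunable amount of computation before being served, while verification stays cheap. A hedged sketch of one common construction (find a nonce whose hash has d leading zero bits) follows; the difficulty value and message layout are illustrative assumptions, not the paper's exact puzzle:

```python
# Client-puzzle sketch: the client brute-forces a nonce whose SHA-256 digest
# has `difficulty` leading zero bits; the server verifies with a single hash.

import hashlib
from itertools import count

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Client side: costs ~2**difficulty hash calls on average."""
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: one hash call -- cheap, which is the whole point."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

challenge = b"server-chosen-fresh-challenge"
nonce = solve_puzzle(challenge, difficulty=16)   # ~65k hashes on average
print(verify(challenge, nonce, 16))              # True
```

Raising the difficulty throttles a flood of malicious requests without meaningfully burdening the (rarely requesting) honest nodes, which is what makes the overhead "affordable for devices that have limited resources", as the entry puts it.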
3. Cempaka Wangi, N.I.; Prasad, R.V.; Jacobsson, M.; Niemegeers, I. 2008-01-01 With the advent of smaller devices having higher computational capacity and wireless communication capabilities, the world is becoming completely networked. Although the mobile nature of these devices provides ubiquitous services, it also poses many challenges. In this article, we look in depth at…

4. Credible Mobile and Ad Hoc Network Simulation-Based Studies 2006-10-26

5. Computing Nash Equilibrium in Wireless Ad Hoc Networks Bulychev, Peter E.; David, Alexandre; Larsen, Kim G. 2012-01-01 This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such a formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem us…

6. The Cost of Parameterized Reachability in Mobile Ad Hoc Networks Delzanno, Giorgio; Traverso, Riccardo; Zavattaro, Gianluigi 2012-01-01 We investigate the impact of spontaneous movement on the complexity of verification problems for an automata-based protocol model of networks with selective broadcast communication. We first consider reachability of an error state and show that parameterized verification is decidable with polynomial complexity. We then move to richer queries and show how the complexity changes when considering properties with negation or cardinality constraints.

7. Cross-Layer Design Approach for Power Control in Mobile Ad Hoc Networks A. Sarfaraz Ahmed 2015-03-01 Full Text Available In mobile ad hoc networks, communication among mobile nodes occurs through the wireless medium. The design of ad hoc network protocols, generally based on a traditional "layered approach", has been found ineffective in dealing with receiving-signal-strength (RSS) related problems, which affect the physical layer, the network layer and the transport layer. This paper proposes a design approach, deviating from the traditional network design, toward enhancing cross-layer interaction among different layers, namely physical, MAC and network. The Cross-Layer design approach for Power control (CLPC) helps to enhance the transmission power by averaging the RSS values and to find an effective route between the source and the destination. This cross-layer design approach was tested by simulation (NS2 simulator), and its performance over AODV was found to be better.
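The CLPC entry above centers on averaging RSS values to steer transmit power. The sketch below illustrates one plausible reading of that loop: smooth the received signal strength with an exponential moving average and nudge transmit power when the average drifts outside a target window. Thresholds, step sizes, and units are assumptions for the example, not values from the paper:

```python
# Illustrative RSS-averaged power control: EMA-smooth the RSS samples and
# adjust transmit power when the average leaves a target window.

TARGET_RSS, MARGIN = -70.0, 5.0        # desired average RSS window, dBm
STEP, ALPHA = 1.0, 0.2                 # power step (dB), EMA smoothing factor

class ClpcLink:
    def __init__(self, tx_power_dbm=10.0):
        self.tx_power = tx_power_dbm
        self.avg_rss = TARGET_RSS      # start at the target

    def on_rss_sample(self, rss_dbm):
        self.avg_rss = ALPHA * rss_dbm + (1 - ALPHA) * self.avg_rss
        if self.avg_rss < TARGET_RSS - MARGIN:
            self.tx_power += STEP      # link weakening: raise power
        elif self.avg_rss > TARGET_RSS + MARGIN:
            self.tx_power -= STEP      # link strong: save energy, cut interference
        return self.tx_power

link = ClpcLink()
for sample in [-72, -78, -80, -81, -79]:   # a fading link
    power = link.on_rss_sample(sample)
print(round(link.avg_rss, 1), power)        # average drops, power steps up
```

Averaging rather than reacting to each raw sample is what keeps the controller from oscillating on the bursty, fast-fading measurements typical of mobile links.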
8. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks 2015-01-01 A new energy-efficient, optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem (BSP) while at the same time considering the nodes' remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks (MANETs) and Wireless Sensor Networks (WSNs). The CDS of a graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here the CDS is a distributed algorithm with activity scheduling based on the unit disk graph (UDG). Node mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented and yield better results. PMID:26221627

9. Relay movement control for maintaining connectivity in aeronautical ad hoc networks 李杰; 孙志强; 师博浩; 宫二玲; 谢红卫 2016-01-01 As a new sort of mobile ad hoc network (MANET), the aeronautical ad hoc network (AANET) has fast-moving airborne nodes (ANs) and suffers from frequent network partitioning due to its rapidly changing topology. In this work, additional relay nodes (RNs) are employed to repair the network and maintain connectivity in the AANET. As ANs move, RNs need to move as well in order to re-establish the topology as quickly as possible. The network model and problem definition are first given, and then an online approach to RN movement control is presented to make ANs achieve a certain connectivity requirement at run time. By defining the minimum cost feasible moving matrix (MCFM), a fast algorithm is proposed for the RN movement control problem. Simulations demonstrate that the proposed algorithm outperforms other control approaches in highly dynamic environments and has great potential to be applied in AANETs.

10. A Distributed Protocol for Detection of Packet Dropping Attack in Mobile Ad Hoc Networks Sen, Jaydip; Balamuralidhar, P; G., Harihara S; Reddy, Harish 2011-01-01 In multi-hop mobile ad hoc networks (MANETs), mobile nodes cooperate with each other without using any infrastructure such as access points or base stations. Security remains a major challenge for these networks due to their features of open medium, dynamically changing topologies, reliance on cooperative algorithms, absence of centralized monitoring points, and lack of clear lines of defense. Among the various attacks to which MANETs are vulnerable, the malicious packet dropping attack is very common, in which a malicious node can partially degrade or completely disrupt communication in the network by consistently dropping packets. In this paper, a mechanism for detection of the packet dropping attack is presented, based on cooperative participation of the nodes in a MANET. The redundancy of routing information in an ad hoc network is utilized to make the scheme robust, so that it works effectively even in the presence of transient network partitioning and Byzantine failure of nodes. The proposed scheme is fully cooperative an…
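A common building block for drop detection of the kind described in the entry above is a watchdog-style counter: a node overhears whether each neighbour actually retransmits the packets handed to it. The sketch below is a minimal illustration of that idea under assumed threshold and window values; the paper's cooperative, redundancy-based scheme is more elaborate:

```python
# Watchdog-style drop detector: track, per neighbour, how many packets handed
# over for forwarding were later overheard being transmitted, and flag
# neighbours whose forwarding ratio falls below a threshold.

import random
from collections import defaultdict

THRESHOLD, MIN_SAMPLES = 0.8, 20

handed = defaultdict(int)      # packets given to a neighbour to forward
forwarded = defaultdict(int)   # packets we later overheard it transmit

def record(neighbor, was_forwarded):
    handed[neighbor] += 1
    if was_forwarded:
        forwarded[neighbor] += 1

def suspects():
    return [n for n in handed
            if handed[n] >= MIN_SAMPLES
            and forwarded[n] / handed[n] < THRESHOLD]

random.seed(1)
for _ in range(100):
    record("good", random.random() < 0.95)   # honest node: occasional radio loss
    record("bad", random.random() < 0.30)    # consistent dropper
print(suspects())                             # ['bad'] expected
```

The MIN_SAMPLES guard matters in practice: legitimate wireless loss would otherwise generate false accusations, which is one reason the entry stresses cooperation among multiple observers.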
11. PNNI routing support for ad hoc mobile networking: A flat architecture Martinez, L.; Sholander, P.; Tolendino, L. 1997-12-01 This contribution extends the Outside Nodal Hierarchy List (ONHL) procedures described in ATM Forum Contribution 97-0766. These extensions allow multiple mobile networks to form either an ad hoc network or an extension of a fixed PNNI infrastructure. This contribution covers the simplest case, where the top-most Logical Group Nodes (LGNs) in those mobile networks all reside at the same level in a PNNI hierarchy. Future contributions will cover the general case where those top-most LGNs reside at different hierarchy levels. This contribution considers a flat ad hoc network architecture, in the sense that each mobile network always participates in the PNNI hierarchy at the preconfigured level of its top-most LGN.

12. PNNI routing support for ad hoc mobile networking: The multilevel case Martinez, L.; Sholander, P.; Tolendino, L. 1998-01-01 This contribution extends the Outside Nodal Hierarchy List (ONHL) procedures described in ATM Forum Contributions 97-0766 and 97-0933. These extensions allow multiple mobile networks to form either an ad hoc network or an extension of a fixed PNNI infrastructure. A previous contribution (97-1073) covered the simplest case, where the top-most Logical Group Nodes (LGNs) in those mobile networks all resided at the same level in a PNNI hierarchy. This contribution covers the more general case, wherein those top-most LGNs may reside at different PNNI hierarchy levels. Both of the SNL contributions consider flat ad hoc network architectures, in the sense that each mobile network always participates in the PNNI hierarchy at the pre-configured level of its top-most LGN.

13. Fuzzy Multiple Metrics Link Assessment for Routing in Mobile Ad-Hoc Network Soo, Ai Luang; Tan, Chong Eng; Tay, Kai Meng 2011-06-01 In this work, we investigate the use of a Sugeno fuzzy inference system (FIS) in route selection for mobile ad-hoc networks (MANETs). The Sugeno FIS is introduced into the Ad-Hoc On-Demand Multipath Distance Vector (AOMDV) routing protocol, which is derived from its predecessor, Ad-Hoc On-Demand Distance Vector (AODV). Instead of the conventional approach of considering only a single metric to choose the best route, our proposed fuzzy decision-making model considers up to three metrics. In the model, the crisp inputs of the three parameters are fed into the FIS and processed in stages, i.e., fuzzification, inference, and defuzzification. Finally, after all the stages, a single-value score is generated from the combined metrics, which is used to measure the credibility of all the discovered routes. Results obtained from simulations show a promising improvement compared to AOMDV and AODV.

14. Performance Evaluation of Reactive Protocols for Ad Hoc Wireless Sensor Network Rashmi A Bichkar 2012-10-01 Full Text Available The requirement for good Quality of Service in a Mobile Ad Hoc Network is that better protocols should be used. To improve protocol efficiency, the two key issues to be considered are low control overhead and low energy consumption. To reduce energy consumption and routing overhead, an enhanced routing algorithm, EEDSR (Energy Efficient Dynamic Source Routing), with a local route enhancement model for DSR (Dynamic Source Routing), is implemented. Comparisons based on routing overhead, energy and throughput are made between EEDSR and the EEAODV (Energy Efficient Ad Hoc on Demand Distance Vector) and AODV (Ad Hoc on Demand Distance Vector) protocols. For all protocols, the NS-2.34 simulator is used.
This paper presents the simulation results in order to choose the best routing protocol for the highest performance. The simulations have shown that the EEDSR protocol performs well, as it consumes 12% less energy than EEAODV and AODV.

15. Computing Nash Equilibrium in Wireless Ad Hoc Networks Bulychev, Peter E.; David, Alexandre; Larsen, Kim G. 2012-01-01 This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such a formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem using Statistical Model Checking. The method has been implemented in the UPPAAL model checker and has been applied to the analysis of the Aloha CSMA/CD and IEEE 802.15.4 CSMA/CA protocols.

16. Enabling content distribution in vehicular ad hoc networks Luan, Tom H; Bai, Fan 2014-01-01 This SpringerBrief presents key enabling technologies and state-of-the-art research on delivering efficient content distribution services to fast-moving vehicles. It describes recent research developments and proposals towards efficient, resilient and scalable content distribution to vehicles through both infrastructure-based and infrastructure-less vehicular networks. The authors focus on the rich multimedia services provided by vehicular-environment content distribution, including vehicular communications and media playback, giving passengers many infotainment applications. Common problem…

17. Multiple Metrics Gateway Selection Scheme in Mobile Ad Hoc Network (MANET) and Infrastructure Network Integration Setiawan, Fudhiyanto Pranata; Bouk, Safdar H.; Sasase, Iwao This paper proposes a scheme to select an appropriate gateway based on multiple metrics, such as remaining energy, mobility or speed, and number of hops, in Mobile Ad Hoc Network (MANET) and infrastructure network integration. The Multiple Criteria Decision Making (MCDM) method called Simple Additive Weighting (SAW) is used to rank and select the gateway node. The SAW method calculates the weights of gateway node candidates by considering these three metrics. The node with the highest weight is selected as the gateway. Simulation results show that our scheme can reduce the average energy consumption of MANET nodes, and improve the throughput, gateway lifetime, and Packet Delivery Ratio (PDR) of the MANET and the infrastructure network.
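SAW, as used in the gateway-selection entry above, normalizes each metric and takes a weighted sum. A hedged sketch follows, with benefit metrics (energy) scaled against the best candidate and cost metrics (speed, hops) inverted; the weights and candidate values are illustrative assumptions, not the paper's parameters:

```python
# SAW-based gateway selection: normalize each metric to [0, 1], weight, sum,
# and pick the candidate with the highest score.

WEIGHTS = {"energy": 0.5, "speed": 0.25, "hops": 0.25}

candidates = {                    # energy (J), speed (m/s), hops to gateway
    "G1": {"energy": 80, "speed": 2.0, "hops": 3},
    "G2": {"energy": 60, "speed": 0.5, "hops": 2},
    "G3": {"energy": 90, "speed": 5.0, "hops": 5},
}

def saw_scores(cands):
    max_e = max(c["energy"] for c in cands.values())   # benefit: higher is better
    min_s = min(c["speed"] for c in cands.values())    # cost metrics:
    min_h = min(c["hops"] for c in cands.values())     # lower is better
    return {name: (WEIGHTS["energy"] * c["energy"] / max_e
                   + WEIGHTS["speed"] * min_s / c["speed"]
                   + WEIGHTS["hops"] * min_h / c["hops"])
            for name, c in cands.items()}

scores = saw_scores(candidates)
print(max(scores, key=scores.get),
      {k: round(v, 3) for k, v in scores.items()})   # G2 wins: slow and close
```

Here the nearly stationary, nearby G2 beats the energy-rich but fast-moving G3, matching the intuition that a gateway about to move out of range is a poor choice regardless of its battery.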
18. A Survey on Trust Management for Mobile Ad Hoc Networks 2010-07-01

19. Security Challenges in Multicast Communication for Mobile Ad Hoc Network S. Gunasekaran 2010-01-01 Full Text Available Problem statement: A multicast communication network accepts a single message from an application and delivers copies of the message to multiple recipients at different locations. Recently, there has been an explosion of research literature on the multicast communication environment. The objectives of this study were to address the complexity of supporting current multicast applications: (i) the lack of reliable multicast transport mechanisms at the network level and (ii) the lack of network support for large-scale multicast communication. The scaling problem of secure multicast key distribution is compounded in the case where sender-specific keys need to be distributed to a group, as required for sender-specific authentication of data traffic, while minimizing control overhead; the study also set out to (iii) compare the computation times of RC4, AES-128, RS(2) and RS(3). Approach: Algorithms were collected and their computation times measured. In general, the multicast key distribution scheme was implemented for distributing 128-bit session keys, so Maximum Distance Separable (MDS) codes were needed for the encoding and decoding process. In the rekeying scheme, errors occur over a period of time or at a particular point in time, and all these errors must be eliminated at the level of the encryption and decryption mechanism. MDS codes play an important role in providing security services for multicast, such as traffic integrity, authentication and confidentiality, which is particularly problematic since it requires securely distributing a group (session) key to each of a group's receivers. Results: First we showed that internet multicasting algorithms based on reverse path forwarding are inherently unreliable, and we present a source-tree-based reliable multicasting scheme. The new scheme is proposed for use as an inter-gateway protocol and works on top of the previously developed distance-vector and link-state internet routing schemes. Next, to support large scale…

20. Ad hoc networking scheme for mobile cyber-physical systems 2017-08-17 Embodiments of the present disclosure provide techniques for packet routing. In an embodiment, when a transmitting communication device injects a packet into a communication network, a receiving communication device that is closer to the sink or destination than the transmitting communication device relays the packet in a first hop. In a subsequent hop, a receiving communication device evaluates position information conveyed by the transmitting communication device of the first hop to determine whether to forward the packet. Accordingly, a receiving communication device that offers progress towards the sink can elect to forward the packet.

1. Survivability analysis of wireless Ad hoc network using stochastic reward nets ZHAO Jing; CUI Gang; LIU Hong-wei; WANG Hui-qiang 2008-01-01 To provide services in a timely manner in the presence of failures or attacks, network survivability was analyzed. Based on stochastic Petri nets, we put forward an effective model for ad hoc networks and adopt a two-phase approach consisting of a steady-state availability analysis and a system transient performance analysis, providing a quantitative approach for analysis of network survivability. The results show that the proposed model is useful for the design and evaluation of wireless ad hoc networks.

2. Condensation-Based Routing in Mobile Ad-Hoc Networks Francesco Palmieri 2012-01-01 Full Text Available The provision of efficient broadcast containment schemes that can dynamically cope with frequent topology changes and limited shared channel bandwidth is one of the most challenging research topics in MANETs, and is crucial to the basic operations of networks serving fully mobile devices within areas having no fixed communication infrastructure.
This problem particularly impacts the design of dynamic routing protocols that can efficiently establish routes to deliver data packets among mobile nodes with minimum communication overhead while, at the same time, ensuring high throughput and low end-to-end delay. Accordingly, this work exploits and analyzes an adaptive probabilistic broadcast containment technique, based on a particular condensation phenomenon borrowed from quantum mechanics and transposed to self-organizing random networks, that has the potential to effectively drive the on-demand route discovery process. Simulation-based performance analysis has shown that the proposed technique can introduce significant benefits to the general performance of broadcast-based reactive routing protocols in MANETs.

3. Comparison of Analytical and Measured Performance Results on Network Coding in IEEE 802.11 Ad-Hoc Networks Zhao, Fang; Médard, Muriel; Hundebøll, Martin 2012-01-01 Network coding is a promising technology that has been shown to improve throughput in wireless mesh networks. In this paper, we compare the analytical and experimental performance of COPE-style network coding in IEEE 802.11 ad-hoc networks. In the experiments, we use a lightweight scheme called…

4. A Comprehensive Performance Comparison of On-Demand Routing Protocols in Mobile Ad-Hoc Networks Khan, Jahangir; Hayder, Syed Irfan A mobile ad hoc network is an autonomous system of mobile nodes connected by wireless links. Each node operates not only as an end system, but also as a router to forward packets. The nodes are free to move about and organize themselves on the fly. In this paper we focus on the performance of on-demand routing protocols, such as DSR and AODV, in ad-hoc networks. We have observed the performance change of each protocol through simulation, varying the data at intermediate nodes, and compared the data throughput in each mobile mode of each protocol to analyze the packet fraction for application data. The objective of this work is to evaluate two on-demand routing protocols, Ad hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR), for wireless ad hoc networks, based on the performance of intermediate nodes in delivering data from source to destination and vice versa, in order to compare the efficiency of throughput at the neighboring nodes. To this end, we used the OPNET simulator for a performance comparison of hop-to-hop delivery of data packets in an autonomous system.

5. Survey: Comparison Estimation of Various Routing Protocols in Mobile Ad-Hoc Network Priyanshu 2014-06-01 Full Text Available A MANET is an autonomous system of mobile nodes attached by wireless links. It represents a complex and dynamic distributed system consisting of mobile wireless nodes that can freely self-organize into an ad-hoc network topology. The devices in the network may have limited transmission range; therefore, multiple hops may be needed for one node to transfer data to another node in the network. This leads to the need for an effective routing protocol. In this paper we study various classifications of routing protocols and their types for wireless mobile ad-hoc networks, such as DSDV, GSR, AODV, DSR, ZRP, FSR, CGSR, LAR, and geocast protocols. We also compare different routing protocols based on a given set of parameters: scalability, latency, bandwidth, control overhead, and mobility impact.
6. Distributed QoS multicast routing protocol in ad hoc networks Sun Baolin; Li Layuan 2006-01-01 Quality of service (QoS) routing and multicasting protocols in ad hoc networks face the challenge of delivering data to destinations through multihop routes in the presence of node movements and topology changes. The multicast routing problem with multiple QoS constraints is discussed, which may involve delay, bandwidth and cost metrics, and a network model for researching the ad hoc network QoS multicast routing problem is described. A distributed QoS multicast routing protocol (DQMRP) is presented, together with a proof of correctness and a complexity analysis of DQMRP. Simulation results show that the multicast tree optimized by DQMRP is better than that of other protocols and is better suited to network situations with frequently changing status and real-time multimedia applications. It is a viable approach to multicast routing decisions with multiple QoS constraints.

7. SD-AODV: A Protocol for Secure and Dynamic Data Dissemination in Mobile Ad Hoc Network Nath, Rajender 2011-01-01 Security remains a major concern in mobile ad hoc networks. This paper presents a new protocol, SD-AODV, which is an extension of the existing protocol AODV. The proposed protocol is made secure and dynamic against three main types of routing attacks: the wormhole attack, the byzantine attack and the blackhole attack. The SD-AODV protocol was evaluated through simulation experiments done on GloMoSim, and the performance of the network was measured in terms of packet delivery fraction, average end-to-end delay, global throughput and route errors of a mobile ad hoc network where a defined percentage of nodes behave maliciously. Experimentally it was found that the performance of the network did not degrade in the presence of the above-said attacks, indicating that the proposed protocol was secure against these attacks.

8. Multi-Hop Bandwidth Management Protocol for Mobile Ad Hoc Networks Pattanayak, Binod Kumar; Jagadev, Alok Kumar; Nayak, Manojranjan 2010-01-01 An admission control scheme should play the role of a coordinator for flows in a data communication network, to provide guarantees as the medium is shared. The nodes of a wired network can monitor the medium to know the available bandwidth at any point in time. But in wireless ad hoc networks, a node must consume the bandwidth of neighboring nodes during a communication. Hence, the consumption of bandwidth by a flow and the availability of resources to any wireless node strictly depend upon the neighboring nodes within its transmission range. We present a scalable and efficient admission control scheme, the Multi-hop Bandwidth Management Protocol (MBMP), to support the QoS requirements in multi-hop ad hoc networks. We simulate several options to design MBMP, compare the performance of these options through mathematical analysis and simulation results, and compare its effectiveness with existing admission control schemes through extensive simulations.

9. Multipath Routing Protocol for Effective Local Route Recovery in Mobile Ad hoc Network S. K. Srivatsa 2012-01-01 Full Text Available Problem statement: In mobile ad hoc networks, frequent mobility during the transmission of data causes route failure, which results in route rediscovery. Here, we propose a multipath routing protocol for effective local route recovery in Mobile Ad hoc Networks (MANETs).
In this protocol, each source and destination pair establishes multiple paths in a single route discovery, and these are cached in their route caches. Approach: The cached routes are sorted on the basis of their bandwidth availability. In case of route failure on the primary route, a recovery node, which is an overhearing neighbor, detects it and establishes a local recovery path with maximum bandwidth from its route cache. Results: Simulation results show that the proposed approach improves network performance. Conclusion: The proposed route recovery management technique prevents frequent collisions and degradation of network performance.

10. GBP-WAHSN: A Group-Based Protocol for Large Wireless Ad Hoc and Sensor Networks Jaime Lloret; Miguel Garcia; Jesus Tomás; Fernando Boronat 2008-01-01 Grouping nodes gives better performance to the whole network by diminishing the average network delay and avoiding unnecessary message forwarding and additional overhead. Many routing protocols for ad-hoc and sensor networks have been designed, but none of them are based on groups. In this paper, we start by defining group-based topologies, and then we show how some wireless ad hoc sensor network (WAHSN) routing protocols perform when the nodes are arranged in groups. In our proposal, connections between groups are established as a function of the proximity of the nodes and the neighbor's available capacity (based on the node's energy). We describe the architecture proposal, the messages that are needed for proper operation, and its mathematical description. We have also simulated how much time is needed to propagate information between groups. Finally, we show a comparison with other architectures.

11. Bandwidth Estimation Problem & Solutions in IEEE 802.11 based Ad Hoc Networks Neeraj Gupta 2012-11-01 Full Text Available With the rise of multimedia applications in ad hoc networks, it is necessary to ensure quality-of-service support from the network. The routers, which may be mobile nodes in ad hoc networks, should be able to evaluate the resources available in the network prior to offering guarantees on delay, bandwidth or any other metric. Estimating the available bandwidth is often required before performing admission control, flow management, congestion control or routing based on bandwidth constraints, so that when any new flow is admitted the existing flows do not degrade. A lot of work, in terms of various tools and techniques to evaluate the available bandwidth, has been proposed in the last decade; no consensus has yet been reached. We present a comprehensive review of the state-of-the-art work carried out in this area.

12. 2011-02-01 Full Text Available A wireless ad-hoc network comprises a set of wireless nodes and requires no fixed infrastructure. For efficient communication between nodes, ad-hoc networks are typically grouped into clusters, where each cluster has a clusterhead (or master). In our study, we take a communication model that is derived from that of Bluetooth. Clusterhead nodes are responsible for the formation of clusters, each consisting of a number of nodes (analogous to cells in a cellular network), and for maintenance of the topology of the network. Consequently, the clusterheads tend to become potential points of failure, and naturally there will be load imbalance. Thus, it is important to consider load balancing in any clustering algorithm. In this paper, we consider the situation where each node has some load, given by the parameter forwarding index.
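The load-balancing concern raised in the entry above can be illustrated with a greedy assignment of nodes to the least-loaded reachable clusterhead, using a forwarding-index-like parameter as the load. This is an illustrative stand-in under assumed inputs, not the paper's algorithm:

```python
# Greedy load-balanced cluster assignment: place each node with the currently
# least-loaded master it can reach, heaviest nodes first.

def assign_to_masters(nodes, reachable):
    """nodes: {node: load}; reachable: {node: [candidate masters]}."""
    masters = {m for cands in reachable.values() for m in cands}
    master_load = dict.fromkeys(masters, 0.0)
    assignment = {}
    # Place heavy nodes first so they do not all pile onto one master later.
    for node, load in sorted(nodes.items(), key=lambda kv: -kv[1]):
        best = min(reachable[node], key=lambda m: master_load[m])
        assignment[node] = best
        master_load[best] += load
    return assignment, master_load

nodes = {"n1": 3.0, "n2": 2.0, "n3": 2.0, "n4": 1.0}
reachable = {"n1": ["M1", "M2"], "n2": ["M1"],
             "n3": ["M1", "M2"], "n4": ["M2"]}
assignment, loads = assign_to_masters(nodes, reachable)
print(assignment, loads)   # load split across M1 and M2 rather than one master
```

Without such balancing, the clusterhead with the lowest ID (or the most central position) tends to absorb every flow, becoming exactly the point of failure the entry warns about.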
13. A Novel Approach to Modeling and Flooding in Ad-hoc Wireless Networks 2008-01-01 Full Text Available This study proposes a new modeling approach for wireless ad-hoc networks. The new approach is based on the construction of fuzzy neighborhoods and essentially consists of assigning a membership or importance degree to each network radio link, reflecting the relative quality of that link. This approach is first used to model the flooding problem, and an algorithm is then proposed to solve this problem, which is of great importance in ad-hoc wireless networks that are intrinsically subject to a certain level of node mobility. Simulations carried out in a dynamic environment show promising results and stability compared to the enhanced dominant pruning algorithm. Such an approach is suitable for taking into account the volatile aspect of radio links and physical-layer uncertainty when modeling these networks, particularly when the physical layer offers no or insufficient guarantees to high-level protocols, as for flooding.

14. PERFORMANCE ANALYSIS OF ON-DEMAND ROUTING PROTOCOLS FOR VEHICULAR AD-HOC NETWORKS A. Shastri 2011-09-01 Full Text Available Vehicular Ad Hoc Networks (VANETs) are a peculiar subclass of mobile ad hoc networks that raise a number of technical challenges, especially from the point of view of their mobility models. Currently, the field of VANETs has gained an important part of the interest of researchers and become very popular. More specifically, VANETs can operate without fixed infrastructure and can survive rapid changes in the network topology. The main method for evaluating the performance of routing protocols for VANETs is Network Simulator-2.34. This paper subjects the on-demand routing protocols to identical loads and evaluates their relative performance with respect to two performance measures: average end-to-end delay and packet delivery ratio. We investigated various simulation scenarios with varying pause times, connections and numbers of nodes, particularly for AODV and DSR. We will also discuss briefly the feasibility of VANETs with respect to Indian automotive networks.

15. SD-AODV: A Protocol for Secure and Dynamic Data Dissemination in Mobile Ad Hoc Network Rajender Nath 2010-11-01 Full Text Available Security remains a major concern in mobile ad hoc networks. This paper presents a new protocol, SD-AODV, which is an extension of the existing protocol AODV. The proposed protocol is made secure and dynamic against three main types of routing attacks: the wormhole attack, the byzantine attack and the blackhole attack. The SD-AODV protocol was evaluated through simulation experiments done on GloMoSim, and the performance of the network was measured in terms of packet delivery fraction, average end-to-end delay, global throughput and route errors of a mobile ad hoc network where a defined percentage of nodes behave maliciously. Experimentally it was found that the performance of the network did not degrade in the presence of the above-said attacks, indicating that the proposed protocol was secure against these attacks.

16. Enhanced Secure Trusted AODV (ESTA) Protocol to Mitigate Blackhole Attack in Mobile Ad Hoc Networks Dilraj Singh 2015-09-01 Full Text Available The self-organizing nature of Mobile Ad hoc Networks (MANETs) provides a communication channel anywhere, anytime, without any pre-existing network infrastructure. However, they are exposed to various vulnerabilities that may be exploited by malicious nodes.
16. Enhanced Secure Trusted AODV (ESTA) Protocol to Mitigate Blackhole Attack in Mobile Ad Hoc Networks
Dilraj Singh
2015-09-01
The self-organizing nature of mobile ad hoc networks (MANETs) provides a communication channel anywhere, anytime, without any pre-existing network infrastructure. However, such networks are exposed to various vulnerabilities that may be exploited by malicious nodes. One such malicious behavior is introduced by blackhole nodes, which can be easily introduced into the network and which try to cripple it by dropping as much of the data in transmission as possible. In this paper, a new protocol, Enhanced Secure Trusted AODV (ESTA), is proposed; it is based on the widely used Ad hoc On-Demand Distance Vector (AODV) protocol and makes use of multiple paths along with trust and asymmetric cryptography to ensure data security. The results, based on NS-3 simulation, reveal that the proposed protocol is effectively able to counter blackhole nodes in three different scenarios.

17. MULTICAST ROUTING WITH QUALITY OF SERVICE CONSTRAINTS IN THE AD HOC WIRELESS NETWORKS
Abdellah Idrissi
2014-01-01
Recent multimedia applications and services are very demanding in terms of quality of service (QoS). This creates new challenges in ensuring QoS when delivering those services over wireless networks. Motivated by the need to support high-quality multicast applications in wireless ad hoc networks, we propose a network topology that can minimize the power used when connecting the source node to the destination nodes in multicast sessions while respecting the QoS provisions. We formulate the problem as an integer linear program with a set of energy and QoS constraints. We minimize the total power used by nodes while satisfying the QoS constraints (bandwidth and maximum delay) that are crucial to wireless ad hoc network performance.

18. Formal reconstruction of attack scenarios in mobile ad hoc and sensor networks
Rekhis Slim
2011-01-01
Several techniques of theoretical digital investigation are presented in the literature, but most of them are unsuitable for coping with attacks in wireless networks, especially in mobile ad hoc and sensor networks (MASNets). In this article, we propose a formal approach for the digital investigation of security attacks in wireless networks. We provide a model for describing attack scenarios in a wireless environment, together with the system and network evidence they generate. The use of formal approaches is motivated by the need to avoid ad hoc generation of results, which impedes the accuracy of analysis and the integrity of investigation. We develop an inference system that integrates the two types of evidence, handles incompleteness and duplication of information in them, and allows possible and provable actions and attack scenarios to be generated. To illustrate the proposal, we consider a case study dealing with the investigation of a remote buffer overflow attack.

19. Protocols for Detection and Removal of Wormholes for Secure Routing and Neighborhood Creation in Wireless Ad Hoc Networks
Hayajneh, Thaier Saleh
2009-01-01
Wireless ad hoc networks are suitable, and sometimes the only solution, for several applications. Many applications, particularly those in military and critical civilian domains (such as battlefield surveillance and emergency rescue), require that ad hoc networks be secure and stable. In fact, security is one of the main barriers to the extensive use…

20. Cross-Layer Service Discovery Mechanism for OLSRv2 Mobile Ad Hoc Networks
M. Isabel Vara; Celeste Campo
2015-07-01
2. New horizons in mobile and wireless communications, v.4: ad hoc networks and PANs
2009-01-01
Based on cutting-edge research projects in the field, this book (part of a comprehensive 4-volume series) provides the latest details and covers the most impactful aspects of mobile, wireless, and broadband communications development. These books present key systems and enabling technologies in a clear and accessible manner, offering a detailed roadmap of the future evolution of next-generation communications. Other volumes cover Networks, Services and Applications; Reconfigurability; and Ad Hoc Networks.

3. Multipath Routing for Self-Organizing Hierarchical Mobile Ad-Hoc Networks – A Review
Udayachandran Ramasamy; Sankaranarayanan, K.
2010-01-01
Security has become a primary concern for providing protected communication between mobile nodes in a hostile environment. The characteristics of ad hoc networks (dynamic topology, lack of infrastructure, variable-capacity links, etc.) are the origin of many issues. Limited bandwidth, energy constraints, and the high cost of security are among the problems encountered. This type of network poses particular challenges in terms of quality of service (QoS) and performance. In this paper, the issues of multipath routing ...

4. Transmission Capacity of Wireless Ad Hoc Networks with Energy Harvesting Nodes
Vaze, Rahul
2012-01-01
The transmission capacity of an ad hoc wireless network is analyzed when each node of the network harvests energy from nature, e.g., solar, wind, or vibration. Transmission capacity is the maximum allowable density of nodes satisfying a per transmitter-receiver rate and an outage probability constraint. Energy arrivals at each node are assumed to follow a Bernoulli distribution, and each node stores energy using an energy buffer/battery. For the ALOHA medium access protocol (MAP), optimal transmiss...

5. SWIPT in 3-D Bipolar Ad Hoc Networks with Sectorized Antennas
Krikidis, Ioannis
2016-01-01
In this letter, we study the simultaneous wireless information and power transfer (SWIPT) concept in 3-D bipolar ad hoc networks with spatial randomness. Due to the three spatial dimensions of the network, we introduce a 3-D antenna sectorization that exploits horizontal and vertical spatial separation. The impact of 3-D antenna sectorization on SWIPT performance is evaluated for the power-splitting technique using stochastic geometry tools. Theoretical and numerical results show that ...

6. ECDSA - Performance improvements of intrusion detection in Mobile Ad-hoc Networks
Vijayakumar R
2015-10-01

7.
E.jahani
2012-06-01
Ad hoc networks have been of great interest among scholars of the field due to their flexibility, quick setup, and high potential, as well as their applications on battlefields and in fires, earthquakes, and other situations where there is no hope of setting up infrastructure networks. Network dynamics, high node mobility, the broadcast nature of communication, the short battery life of mobile devices, transmission errors with the resulting packet loss, and limited bandwidth all make routing in these networks more difficult than in other networks.
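Entry 4's setup (Bernoulli energy arrivals into a finite battery, slotted ALOHA access) lends itself to a small Monte Carlo sketch. Everything below (parameter values, names) is an illustrative assumption, not the paper's model or its analytical results:

    import random

    def simulate(p_arrival=0.3, p_tx=0.5, battery_cap=5, cost=1,
                 slots=100_000, seed=1):
        """Fraction of slots in which a node actually transmits when energy
        arrives as a Bernoulli process and ALOHA grants access w.p. p_tx."""
        rng = random.Random(seed)
        battery, transmissions = 0, 0
        for _ in range(slots):
            if rng.random() < p_arrival:                  # Bernoulli energy arrival
                battery = min(battery_cap, battery + 1)   # finite battery
            if rng.random() < p_tx and battery >= cost:   # attempt needs energy
                battery -= cost
                transmissions += 1
        return transmissions / slots

    # Effective transmission probability is capped by the energy arrival rate.
    print(simulate())

With p_arrival below p_tx, the node is energy-limited and the effective medium access probability settles near the arrival rate, which is the kind of coupling the transmission-capacity analysis has to account for.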
8. Guard against cooperative black hole attack in Mobile Ad-Hoc Network
Harsh Pratap Singh
2011-07-01
A mobile ad hoc network is an autonomous network that consists of nodes which communicate with each other over a wireless channel. Due to their dynamic nature and node mobility, mobile ad hoc networks are more vulnerable to security attacks than conventional wired and wireless networks. AODV is one of the principal routing protocols used in MANETs. The security of the AODV protocol is affected by a particular type of attack called the black hole attack. In a black hole attack, a malicious node injects a faked route reply claiming to have the shortest and freshest route to the destination; however, when the data packets arrive, the malicious node discards them. To prevent black hole attacks, this paper presents RBS (Reference Broadcast Synchronization) and a relative-velocity-distance method for the clock synchronization process in mobile ad hoc networks, for the removal of cooperative black hole nodes. The performance is evaluated in the NS2 network simulator, and our analysis indicates that this method is very suitable for removing the black hole attack.

9. ENERGY EFFICIENT ROUTING PROTOCOLS FOR WIRELESS AD HOC NETWORKS – A SURVEY
K. Sankar
2012-06-01
Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in wireless communication. The performance of the medium access control (MAC) scheme not only has a fairly significant effect on the behaviour of the routing approach employed, but also on the energy consumption of the wireless network interface card (NIC). We investigate herein the inadequacies of the MAC schemes designed for ad hoc wireless networks in the context of power awareness. Topology changes due to uncontrollable factors such as node mobility, weather, interference, and noise, as well as controllable parameters such as transmission power and antenna direction, result in a significant amount of energy loss. Controlling rapid topology changes by minimizing the maximum transmission power used in ad hoc wireless networks, while still maintaining network connectivity, can prolong battery life and hence network lifetime considerably. In addition, we systematically explore the potential energy-consumption pitfalls of non-power-based and power-based routing schemes. We present a thorough energy-based performance survey of energy-aware routing protocols for wireless mobile ad hoc networks, along with the statistical performance metrics measured in our simulations.
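Entry 9 above mentions minimizing the maximum transmission power while keeping the network connected. Under a unit-disk model this reduces to finding the smallest common range whose induced graph is connected; a minimal sketch, assuming known node positions (radiated power then scales roughly as range raised to a path-loss exponent):

    import math
    from itertools import combinations

    def connected(points, radius):
        """Unit-disk connectivity check via union-find."""
        parent = list(range(len(points)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for (i, p), (j, q) in combinations(enumerate(points), 2):
            if math.dist(p, q) <= radius:
                parent[find(i)] = find(j)
        return len({find(i) for i in range(len(points))}) == 1

    def min_common_range(points, tol=1e-3):
        """Bisect the smallest common transmission range keeping connectivity."""
        lo, hi = 0.0, max(math.dist(p, q) for p, q in combinations(points, 2))
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (lo, mid) if connected(points, mid) else (mid, hi)
        return hi

    pts = [(0, 0), (1, 0), (2, 1), (0, 2)]
    print(round(min_common_range(pts), 3))   # ~2.0 for this layout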
10. An Adaptive Fuzzy Clustering and Location Management in Mobile Ad Hoc Networks
Obulla Reddy
2012-11-01
In typical ad hoc network applications, the network hosts usually perform a given task in groups, e.g., command and control of staff and equipment in military affairs, traffic management, etc. The study of multicast routing protocols for ad hoc networks is therefore very significant. Multicast protocols in MANETs must consider the control overhead of maintenance, the energy efficiency of nodes, and the management of routing trees under frequent changes of network topology. Nowadays multicast protocols are being extended with cluster-based approaches, and cluster-based multicast tree formation is still a research issue. The mobility of nodes increases communication delay because of re-clustering and cluster-head selection. For this issue we evaluate an Adaptive Fuzzy System (AFS) for multicast communication in mobile ad hoc networks (MANETs). To evaluate the performance of AFS, we simulate the fuzzy clustering in a variety of mobile network topologies in NS-2 and compare it with the Cluster-based On-Demand Multicast Routing Protocol (CODMRP) and the Cluster-Based Routing Protocol (CBRP). Our simulation results show the effectiveness and efficiency of AFMR: a high packet delivery ratio is achieved while the delay and overhead are the lowest.

11. Impact of Malicious Nodes under Different Route Refresh Intervals in Ad Hoc Network
P. Suganthi
2012-01-01
Problem statement: Ad hoc networks are formed dynamically by groups of mobile devices cooperating with each other. Intermediate nodes between source and destination act as routers, so that a source node can communicate with the destination node even if it is out of radio range, eliminating the need for infrastructure. Cooperation of nodes is a very important feature for the successful deployment of ad hoc networks: intermediate nodes should be involved not only in the route discovery process but also in the retransmission of packets between source and destination. Approach: Since nodes have to be cooperative for the successful deployment of ad hoc networks, the security mechanisms cannot afford to be stringent, which enables malicious nodes to attack the network successfully. The capability of the Optimized Link State Routing protocol has been studied extensively for different types of ad hoc networks, and it has been shown to behave somewhere in between proactive and reactive routing protocols. Results: In this study we investigate the impact of malicious nodes on the Optimized Link State Routing (OLSR) protocol under different hello intervals, which affect the route discovery process, and subsequently investigate the degradation of quality of service (QoS). Conclusion: It is observed that throughput deteriorates when the network is attacked by malicious nodes that selectively retransmit data to some of the destinations. The performance degradation increases as the hello interval is set beyond 4 s. Higher hello intervals decrease the control packet overhead, and it is observed that even with higher hello intervals the network performance is much better than under attack by a small group of malicious nodes.

12.
刘巧平; 王建望
2014-01-01
Mobile ad hoc networks (MANETs) are not constrained by space or time and are fast and convenient to deploy. They can be applied not only in hazardous environments, over long distances, on battlefields, and in conference and rescue settings, but can also extend edge networks, so their applications are universal. This article therefore discusses the basic concepts and features of mobile ad hoc networks, then analyzes the challenges faced in mobile ad hoc network design, and finally discusses their applications.

13. Exploiting Mobile Ad Hoc Networking and Knowledge Generation to Achieve Ambient Intelligence
Anna Lekova
2012-01-01
Ambient intelligence (AmI) joins together the fields of ubiquitous computing and communications, context awareness, and intelligent user interfaces. Energy, fault tolerance, and mobility are newly added dimensions of AmI.
Within the context of AmI, the concept of mobile ad hoc networks (MANETs) for "anytime and anywhere" is likely to play a larger role in a future in which people are surrounded and supported by small context-aware, cooperative, and nonobtrusive devices that aid everyday life. The connection between knowledge generation and ad hoc networking is symbiotic: knowledge generation utilizes ad hoc networking to meet its communication needs, and MANETs utilize knowledge generation to enhance their network services. The contribution of the present study is a distributed evolving fuzzy modeling framework (EFMF) to observe and categorize relationships and activities at the user and application level and, based on that social context, to make intelligent decisions about MANET service management. EFMF employs an unsupervised online one-pass fuzzy clustering method to recognize nodes' mobility context from social scenario traces and to ubiquitously learn "friends" and "strangers" indirectly and anonymously.

14. SURVEY ON MOBILE AD HOC NETWORK ATTACKS AND MITIGATION USING ROUTING PROTOCOLS
S. P. Manikandan
2012-01-01
Mobile ad hoc networks (MANETs), due to their unpredictable topology and bandwidth limitations, are vulnerable to attacks. Establishing security measures and finding secure routes are the major challenges faced by MANETs. Security issues faced by ad hoc networks include node authentication, insider attacks, and intrusion detection. Implementing security measures is challenging due to the limited resources of the hardware devices and the network. Routing protocols attempt to mitigate attacks by isolating the malicious nodes. In this study, various kinds of attacks against MANETs are surveyed. It is also proposed to study modifications of the AODV and DSR routing protocol implementations with regard to attack mitigation and intrusion detection, and various approaches to predicting and mitigating attacks in MANETs are examined.

15. LAR VS DSR: EVALUATION OF AD HOC NETWORK PROTOCOLS IN PRACTICAL MANAGEMENT OF COMMUNICATION OF ROBOTS
HANIEH MOVAHEDI
2014-12-01
Controlling and managing robots and their communication with one another is an important issue, and infrastructure-less wireless technologies such as ad hoc networks can serve efficiently thanks to their quick startup and low cost. Various protocols have been used in this field. In the present study, two well-known ad hoc network protocols were simulated in GloMoSim for a 4 km2 work area, varying the same elements across robot types (speed, pause time, number of nodes) and measuring the parameters that indicate network optimization: PDR, throughput, and end-to-end delay. For each protocol, outputs were calculated under equal conditions, and the resulting data were analyzed statistically. Overall, the LAR protocol achieved higher scores than DSR and could technically be used as the protocol of choice in robotic industries.
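Entry 13 above relies on an online one-pass fuzzy clustering method. As a generic illustration of the fuzzy part, here is the standard fuzzy c-means membership computation (not the paper's exact one-pass variant): a sample's membership in each cluster is derived from its distances to the cluster centers.

    def fcm_memberships(x, centers, m=2.0):
        """Standard fuzzy c-means membership of sample x w.r.t. given centers:
        u_i = 1 / sum_k (d_i/d_k)^(2/(m-1)); m > 1 controls fuzziness."""
        d = [abs(x - c) for c in centers]
        if any(di == 0 for di in d):                  # exactly on a center
            return [1.0 if di == 0 else 0.0 for di in d]
        p = 2.0 / (m - 1.0)
        return [1.0 / sum((di / dk) ** p for dk in d) for di in d]

    # Sample at 2.0 lies closer to the center at 3.0, so that membership wins:
    print(fcm_memberships(2.0, centers=[0.0, 3.0]))   # [0.2, 0.8]

Memberships always sum to one, which is what lets such a framework treat a node as partly "friend" and partly "stranger" rather than forcing a hard label.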
16. A Face Centered Cubic Key Agreement Mechanism for Mobile Ad Hoc Networks
Askoxylakis, Ioannis G.; Markantonakis, Konstantinos; Tryfonas, Theo; May, John; Traganitis, Apostolos
Mobile ad hoc networking is an operating mode for rapid mobile node networking. Each node relies on adjacent nodes in order to achieve and maintain connectivity and functionality. Security is considered among the main issues for the successful deployment of mobile ad hoc networks (MANETs). In this paper we introduce a weak-to-strong authentication mechanism associated with a multiparty contributory key establishment method. The latter is designed for MANETs with dynamically changing topologies due to the continuous flow of incoming and departing nodes. We introduce a new cube algorithm based on the face-centered cubic (FCC) structure. The proposed architecture employs elliptic curve cryptography, which is considered more efficient for thin clients, where processing power and energy consumption are significant constraints.

17. Designing and implementing an experimental wireless mobile ad hoc networks testbed
Li, Lixin; Dai, Guanzhong; Mu, Dejun; Zhang, Huisheng
2006-11-01
A very large number of simulation models have been developed to study ad hoc network architectures and protocols under many network scenarios, numbers of nodes, mobility rates, etc. However, the fidelity of simulation results has always been a concern, especially when the protocols being studied are affected by the propagation and interference characteristics of the radio channels. This paper describes our experience in designing and implementing a MANET prototype system, the Experimental Wireless Mobile Ad hoc Networks Testbed (EWMANT), in order to perform large-scale, reproducible experiments. EWMANT aims at assessing several different protocols in a real-world environment instead of by simulation. It assists us in finding and evaluating a proper solution, showing the clear advantage of real-world implementations compared to simulations.

18. Enabling Adaptive Rate and Relay Selection for 802.11 Mobile Ad Hoc Networks
Mehta, Neil; Wang, Wenye
2011-01-01

19. SECURITY IN VEHICULAR AD HOC NETWORK BASED ON INTRUSION DETECTION SYSTEM
Omkar Pattnaik
2014-01-01
The implementation of mobile ad hoc networks has come to touch practically most parts of day-to-day life. One variation of such networks is the vehicular ad hoc network (VANET), widely implemented to control day-to-day road traffic. The major concern of VANETs revolves around providing security to moving vehicles, which makes it possible to reduce accidents and traffic jams and, moreover, to establish communication among different vehicles. In this study, we analyze a number of possible attacks that may pertain to VANETs. Intrusion detection imposes various challenges on the efficient implementation of VANETs; to overcome them, several intrusion detection measures have been proposed. The watchdog technique is one of them. We detail this technique so as to make it convenient to implement in our future investigations.
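Entry 16 above builds its key agreement on elliptic curve cryptography. A minimal sketch of a plain two-party ECDH exchange using the pyca/cryptography package (the paper's multiparty FCC construction is not reproduced here):

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each party generates an ephemeral key pair on the same curve.
    alice = ec.generate_private_key(ec.SECP256R1())
    bob = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its private key with the peer's public key.
    shared_a = alice.exchange(ec.ECDH(), bob.public_key())
    shared_b = bob.exchange(ec.ECDH(), alice.public_key())
    assert shared_a == shared_b

    # Derive a symmetric session key from the shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"manet-session").derive(shared_a)
    print(session_key.hex())

The appeal for thin clients is exactly what the abstract notes: ECC reaches a given security level with far smaller keys, and hence less computation and energy, than RSA-style alternatives.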
20. Enhancing On-Demand Multicast Routing Protocols using Mobility Prediction in Mobile Ad-hoc Network
Nermin Makhlouf
2014-08-01
A mobile ad hoc network (MANET) is a self-organizing wireless communication network in which mobile devices rely on no infrastructure such as base stations or access points. Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like disaster recovery or military conflict. Since node mobility may cause links to break frequently, a very important issue for routing in MANETs is how to establish reliable paths that last as long as possible. To solve this problem, the non-random mobility patterns that mobile users exhibit are exploited. This paper introduces a scheme to improve the performance of the On-Demand Multicast Routing Protocol (ODMRP) by using mobility prediction.

1. A Distributed Virtual Backbone Formation for Wireless Ad Hoc and Sensor Networks
CAO Yong-tao; HE Chen; JIANG Ling-ge
2007-01-01
The virtual backbone is an approach to solving routing problems in wireless ad hoc and sensor networks. A connected dominating set (CDS) has been proposed as a virtual backbone to improve the performance of wireless networks. The quality of a virtual backbone is measured not only by the approximation factor, the ratio of its size to that of a minimum CDS, but also by time complexity and message complexity. In this paper, a distributed algorithm is presented to construct a minimum CDS for ad hoc and sensor networks. By destroying triangular loops in the virtual backbone, the proposed algorithm can effectively construct a CDS of smaller size. Moreover, our algorithm, which is fully localized, has a constant approximation ratio, linear message and time complexity, and low implementation complexity. The simulation results and theoretical analysis show that our algorithm has better efficiency and performance than conventional approaches.

2. Multicasting along Energy-Efficient Meshes in Mobile Ad Hoc Networks
JIANG Hai; CHENG Shixin; HE Yongming
2003-01-01
In consideration of the fact that current mesh-based multicast routing protocols for mobile ad hoc networks do not tend to form an energy-efficient multicast infrastructure, we propose a new Energy-Efficient Multicast Routing Protocol (E2MRP) for mobile ad hoc networks. The two main characteristics of E2MRP are: (1) using in turn the criteria of minimum energy consumed per packet and minimum maximum node cost during relaying group (RG) creation and maintenance; and (2) forming a graph-based multicast infrastructure instead of a tree-based one. Compared to multicast incremental power (MIP) and the on-demand multicast routing protocol (ODMRP), as the simulation results show, E2MRP tremendously reduces the energy consumption rate of nodes and hence prolongs the lifetime of nodes and networks, especially when the multicast group is small and node mobility is low.

3. Impact of Rushing attack on Multicast in Mobile Ad Hoc Network
Palanisamy, V
2009-01-01
A mobile ad hoc network (MANET) is a self-organizing system of mobile nodes that communicate with each other via wireless links with no fixed infrastructure or centralized administration such as base stations or access points. Nodes in a MANET operate both as hosts and as routers, forwarding packets for each other in a multihop fashion. For many applications in wireless networks, multicasting is an important and frequent communication service: a single message can be delivered to multiple receivers simultaneously, greatly reducing the transmission cost of sending the same packet to multiple recipients. The security of MANETs in group communication is even more challenging because of the involvement of multiple senders and multiple receivers. During multicasting, mobile ad hoc networks are unprotected against the attacks of malicious nodes because of vulnerabilities in routing protocols. Some of the attacks are the rushing attack, blackhole attack, Sybil attack, neighbor attack ...
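Entry 1 above uses a connected dominating set (CDS) as the virtual backbone. A minimal centralized greedy sketch of CDS construction (the paper's algorithm is distributed and localized; this only illustrates what a CDS is, and it assumes a connected input graph):

    def greedy_cds(adj):
        """Grow a connected dominating set: start at the max-degree node, then
        repeatedly add the neighbor of the current set that covers the most
        still-uncovered nodes. `adj` maps node -> set of neighbors."""
        nodes = set(adj)
        start = max(nodes, key=lambda n: len(adj[n]))
        cds = {start}
        covered = {start} | adj[start]
        while covered != nodes:
            frontier = {n for c in cds for n in adj[c]} - cds
            best = max(frontier, key=lambda n: len(adj[n] - covered))
            cds.add(best)
            covered |= adj[best] | {best}
        return cds

    adj = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3, 6}, 6: {5}}
    print(greedy_cds(adj))   # {2, 3, 5}: dominates all nodes, stays connected

Because new members are always picked from the frontier of the current set, the result stays connected by construction; only nodes in the CDS need to relay routing traffic.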
4.
Esma Insaf Djebbar
2010-11-01
The flexibility and diversity of wireless mobile networks offer many opportunities that are not always taken into account by existing distributed systems. In particular, the proliferation of mobile users and the use of mobile ad hoc networks promote the formation of collaborative groups to share resources. We propose a solution for fault tolerance management in ad hoc networks, combining the functions needed for better availability of data. Our contribution takes into account the characteristics of mobile terminals in order to reduce the consumption of critical resources such as energy and to minimize the loss of information. The solution is based on the formation of clusters, each managed by a leader node, and is mainly composed of four sub-services, namely prediction, replication, management of the nodes in the cluster, and supervision. We have shown, using several sets of simulations, that the benefit of our solution is twofold: it minimizes energy consumption, which increases the lifetime of the network, and it copes better with lost requests.

5. Neighbor Attack And Detection Mechanism In Mobile Ad-Hoc Networks
S. Parthiban
2012-04-01
In mobile ad hoc networks (MANETs), security is one of the most important concerns, because a MANET is much more vulnerable to attacks than a wired or infrastructure-based wireless network. Designing an effective security protocol for MANETs is a very challenging task, mainly due to their unique characteristics: a shared broadcast radio channel, an insecure operating environment, lack of central authority, lack of association among users, limited availability of resources, and physical vulnerability. In this paper we present a simulation-based study of the impact of the neighbor attack on mesh-based MANETs, and we study how the number and positions of attackers affect performance metrics such as packet delivery ratio and throughput. The study enables us to propose a secure neighbor detection mechanism (SNDM). A generic detection mechanism against the neighbor attack for on-demand routing protocols is simulated in the GloMoSim environment.

6. Maximization of Energy Efficiency in Wireless ad hoc and Sensor Networks With SERENA
Saoucene Mahfoudh
2009-01-01
In wireless ad hoc and sensor networks, an analysis of the node energy consumption distribution shows that the largest part is due to the time spent in the idle state. This result is at the origin of SERENA, an algorithm to SchEdule RoutEr Nodes Activity. SERENA allows router nodes to sleep while ensuring end-to-end communication in the wireless network. It is a localized and decentralized algorithm assigning time slots to nodes. A node stays awake only during its own slot and the slots assigned to its neighbors; it sleeps the remaining time. Simulation results show that SERENA enables us to maximize network lifetime while increasing the number of user messages delivered. SERENA is based on a two-hop coloring algorithm, whose complexity in terms of colors and rounds is evaluated. We then quantify the slot reuse. Finally, we show how SERENA improves the node energy consumption distribution and maximizes the energy efficiency of wireless ad hoc and sensor networks. We compare SERENA with classical TDMA and optimized variants such as USAP in wireless ad hoc and sensor networks.
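Entry 6's SERENA rests on two-hop node coloring: no node may share a color with any node at distance one or two hops, so each color can serve as a collision-free TDMA slot. A minimal centralized greedy sketch (SERENA itself is distributed and localized):

    def two_hop_coloring(adj):
        """Assign each node the smallest color unused within two hops.
        `adj` maps node -> set of one-hop neighbors. Colors map to slots."""
        color = {}
        for n in sorted(adj, key=lambda v: -len(adj[v])):   # high degree first
            two_hop = set(adj[n]) | {m for nb in adj[n] for m in adj[nb]}
            two_hop.discard(n)
            taken = {color[m] for m in two_hop if m in color}
            c = 0
            while c in taken:
                c += 1
            color[n] = c
        return color

    # A 5-node chain 0-1-2-3-4: nodes two hops apart must also differ.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(two_hop_coloring(adj))   # three colors suffice on a chain

The two-hop constraint is what prevents hidden-terminal collisions: two nodes that share a neighbor can never be awarded the same slot, yet the same color can be reused farther away, which is the slot reuse the abstract quantifies.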
7. An Effective Approach for Mobile ad hoc Network via I-Watchdog Protocol
Nidhi Lal
2014-12-01
Mobile ad hoc networks (MANETs) have become very popular due to their infrastructure-less quality and dynamic nature. They contain a large number of nodes that are connected to and communicate with each other wirelessly. A mobile ad hoc network is a wireless technology with high node mobility that does not depend on a background administrator for central authority, because it contains no infrastructure. Nodes of a MANET use radio waves for communication and have limited resources and limited computational power. The topology of such a network changes very frequently, because it is distributed in nature and self-configuring. Due to their wireless nature and the lack of any central authority, mobile ad hoc networks are always vulnerable to security and performance issues, and security has a huge impact on the performance of any network; some of the security issues are the black hole attack, flooding, and the wormhole attack. In this paper, we discuss the low performance of the Watchdog protocol used in MANETs and propose an improved Watchdog mechanism, called the I-Watchdog protocol, which overcomes the limitations of the Watchdog protocol and gives high performance in terms of throughput and delay.

8. Fault Tolerant Mechanism for Multimedia Flows in Wireless Ad Hoc Networks Based on Fast Switching Paths
Juan R. Diaz
2014-01-01
Multimedia traffic can be forwarded through a wireless ad hoc network using the available resources of the nodes. Several models and protocols have been designed to organize and arrange the nodes so as to improve transmissions along the network. We use a cluster-based framework, called the MWAHCA architecture, which optimizes multimedia transmissions over a wireless ad hoc network; we proposed it in a previous research work. That architecture focuses on decreasing quality of service (QoS) parameters like latency, jitter, and packet loss, but other network features, like load balancing or fault tolerance, were not developed. In this paper, we propose a new fault tolerance mechanism, based on the MWAHCA architecture, to recover any multimedia flow crossing the wireless ad hoc network when a node fails. The algorithm can run independently for each multimedia flow. The main objective is to keep the QoS parameters as low as possible; to achieve this goal, the convergence time must be controlled and reduced. This paper provides the designed protocol, the analytical model of the algorithm, and a software application developed to test its performance in a real laboratory.
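The watchdog mechanism behind entry 7 is simple to sketch: after handing a packet to a next hop, the sender overhears the channel and counts packets the next hop never retransmits; a node exceeding a misbehavior threshold is flagged. A toy illustration (the threshold, timeout, and API are assumptions, not the I-Watchdog design):

    class Watchdog:
        """Toy watchdog: tracks packets a next hop was expected to forward."""
        def __init__(self, timeout=0.5, max_failures=3):
            self.timeout = timeout
            self.max_failures = max_failures
            self.pending = {}        # (next_hop, packet_id) -> time sent
            self.failures = {}       # next_hop -> missed-forwarding count

        def sent(self, next_hop, packet_id, now):
            self.pending[(next_hop, packet_id)] = now

        def overheard(self, next_hop, packet_id):
            self.pending.pop((next_hop, packet_id), None)   # hop forwarded it

        def check(self, now):
            """Expire pending entries; return nodes flagged as misbehaving."""
            for (hop, pid), t in list(self.pending.items()):
                if now - t > self.timeout:
                    del self.pending[(hop, pid)]
                    self.failures[hop] = self.failures.get(hop, 0) + 1
            return [h for h, c in self.failures.items()
                    if c >= self.max_failures]

    wd = Watchdog()
    for pid in range(4):                 # node B never forwards anything
        wd.sent("B", pid, now=0.0)
    print(wd.check(now=1.0))             # ['B'] flagged after repeated misses

The classic weaknesses of this scheme (ambiguous collisions, limited overhearing range, false positives) are exactly the low-performance issues that improved variants such as I-Watchdog set out to address.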
9. DYNAMIC K-MEANS ALGORITHM FOR OPTIMIZED ROUTING IN MOBILE AD HOC NETWORKS
Zahra Zandieh Shirazi
2016-04-01
In this paper, a dynamic K-means algorithm to improve the routing process in mobile ad hoc networks (MANETs) is presented. Mobile ad hoc networks are a collection of mobile wireless nodes that can operate without focal access points, pre-existing infrastructure, or centralized management. In MANETs, the quick motion of nodes modifies the network topology. This feature of MANETs leads to various problems in the routing process, such as an increase in overhead messages and inefficient routing between the nodes of the network. A large variety of clustering methods have been developed for establishing an efficient routing process in MANETs; routing is one of the crucial topics with a significant impact on MANET performance. The K-means algorithm is one of the effective clustering methods, aimed at reducing routing difficulties related to bandwidth, throughput, and power consumption. This paper proposes a new K-means clustering algorithm to find the optimal path from a source node to destination nodes in MANETs. The main goal of the proposed approach, called the dynamic K-means clustering method, is to overcome the limitations of the basic K-means method, such as a permanent cluster head and fixed cluster members. The experimental results demonstrate that the dynamic K-means scheme enhances the performance of the routing process in mobile ad hoc networks.

10. An Opportunistic Routing Protocol for Mobile Cognitive Radio Ad hoc networks
S. Selvakanmani
2014-05-01

11. An Agent Based Intrusion Detection Model for Mobile Ad Hoc Networks
B. M. Reshmi
2006-01-01
Intrusion detection has, over the last few years, assumed paramount importance within the broad realm of network security, more so in the case of wireless mobile ad hoc networks. The inherently vulnerable characteristics of wireless mobile ad hoc networks make them susceptible to attacks in spite of some security measures, and it may be too late before any counteraction can take effect. As such, there is a need to complement traditional security mechanisms with efficient intrusion detection and response systems. This paper proposes an agent-based model to address intrusion detection in cluster-based mobile wireless ad hoc network environments. The model comprises a set of static and mobile agents, which are used to detect intrusions, respond to intrusions, and distribute selected and aggregated intrusion information to all other nodes in the network in an intelligent manner. The model is simulated to test its operational effectiveness, considering performance parameters such as detection rate, false positives, agent overhead, and intrusion information distribution time. The agent-based approach facilitates flexible and adaptable security services. It also supports component-based software engineering qualities such as maintainability, reachability, reusability, adaptability, flexibility, and customization.

12. Performance Impacts of Lower-Layer Cryptographic Methods in Mobile Wireless Ad Hoc Networks
VAN LEEUWEN, BRIAN P.; TORGERSON, MARK D.
2002-10-01
In high-consequence systems, all layers of the protocol stack need security features. If network and data-link layer control messages are not secured, a network may be open to adversarial manipulation. The open nature of the wireless channel makes mobile wireless ad hoc networks (MANETs) especially vulnerable to control-plane manipulation. The objective of this research is to investigate MANET performance issues when cryptographic processing delays are applied at the data-link layer. The results of the analysis are combined with modeling and simulation experiments to show that network performance in MANETs is highly sensitive to this cryptographic overhead.
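The K-means step behind entry 9 above can be sketched on node coordinates. This is plain K-means (the paper's dynamic variant, which re-elects heads and members as nodes move, is not reproduced here):

    import math, random

    def kmeans(points, k, iters=20, seed=0):
        """Plain k-means over 2-D node positions; returns (centers, assignment)."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: math.dist(p, centers[c]))
                groups[i].append(p)
            centers = [
                (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                if g else centers[i]
                for i, g in enumerate(groups)
            ]
        assign = {p: min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points}
        return centers, assign

    pts = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
    centers, assign = kmeans(pts, k=2)
    print(centers)    # one center near (0.33, 0.33), one near (9.33, 9.33)

In a clustering-based routing scheme the resulting centers suggest cluster heads (e.g., the node nearest each center), and routing then operates over the much smaller head-to-head backbone.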
13. Implementation and performance evaluation of mobile ad hoc network for Emergency Telemedicine System in disaster areas
Kim, J C; Kim, D Y; Jung, S M; Lee, M H; Kim, K S; Lee, C K; Nah, J Y; Lee, S H; Kim, J H; Choi, W J; Yoo, S K
2009-01-01
So far we have developed the Emergency Telemedicine System (ETS), a robust system using heterogeneous networks. In disaster areas, however, ETS cannot be used if the primary network channel is disabled due to damage to the network infrastructure. We therefore designed network management software for a disaster communication network combining a mobile ad hoc network (MANET) and a wireless LAN (WLAN). This software maintains routes to a backbone gateway node in dynamic network topologies. In this paper, we introduce the proposed disaster communication network with its management software and evaluate its performance by running ETS between a medical center and simulated disaster areas. We also present the results of a network performance analysis that establishes the feasibility of an actual telemedicine service in disaster areas via MANET and mobile networks (e.g., HSDPA, WiBro).

14. RESEARCH ON ANONYMOUS COMMUNICATION TECHNOLOGIES IN AD HOC NETWORKS (无线 Ad hoc 网络匿名通信技术研究)
王秀芝; 石志东; 房卫东; 张小珑; 单联海
2016-01-01
The multi-hop, self-organizing, infrastructure-less nature and limited computing resources of wireless ad hoc networks (MANETs) make the high-complexity security algorithms of traditional networks difficult to apply. Security mechanisms combined with anonymity techniques, however, can effectively protect node privacy and keep communication relationships confidential. Targeting existing anonymity techniques, this paper uses comparative analysis to examine the anonymity technologies of traditional networks and summarize their technical advantages and disadvantages, studies the anonymity technologies used in ad hoc networks, and compares the security performance of various anonymous communication protocols, providing help for subsequent research and applications.

15. A New Proposal for Route Finding in Mobile AdHoc Networks
H.Vignesh Ramamoorthy
2013-06-01
A mobile ad hoc network (MANET) is a kind of wireless ad hoc network: a self-configuring network of mobile routers (and associated hosts) connected by wireless links, the union of which forms an arbitrary topology. The routers are free to move randomly and organize themselves arbitrarily; thus the network's wireless topology may change rapidly and unpredictably. Such a network may operate in a standalone fashion or may be connected to the larger Internet. Various routing protocols are available for MANETs; the most popular ones are DSR, AODV, and DSDV. This paper examines two routing protocols for mobile ad hoc networks: Destination-Sequenced Distance Vector (DSDV) and Ad hoc On-Demand Distance Vector (AODV) routing. Generally, routing algorithms can be classified as reactive or proactive; a hybrid algorithm combines the basic properties of both. The proposed approach is a novel routing pattern based on Ant Colony Optimization (ACO) and a Multi-Agent System (MAS). This pattern integrates the two algorithms and helps obtain optimal routes for a particular radio range. The proposed integrated approach has a relatively short route establishment time while using a small number of control messages, which makes it scalable. The overhead of this routing approach is low, and it also provides an alternate route upon route failure.
The proposed route-finding scheme, in order to provide high connectivity of nodes, minimizes route discovery latency and end-to-end delay.

16. Signaling-Free Max-Min Airtime Fairness in IEEE 802.11 Ad Hoc Networks
Youngsoo Lee
2016-01-01
We propose a novel medium access control (MAC) protocol, referred to as signaling-free max-min airtime fair (SMAF) MAC, to improve fairness and channel utilization in ad hoc networks based on IEEE 802.11 wireless local area networks (WLANs). We introduce the busy time ratio (BTR) as a measure of max-min airtime fairness. Each node estimates its BTR and adjusts its transmission duration by means of frame aggregation and fragmentation, so that it can implicitly announce the BTR to neighbor nodes. Based on the announced BTR, each of the neighbor nodes controls its contention window. In this way, the SMAF MAC works in a distributed manner without needing to know the max-min fair share of airtime, and it does not require exchanging explicit control messages among nodes to attain fairness. Moreover, we successfully incorporate hidden-node detection and resolution mechanisms into the SMAF MAC to deal with the hidden node problem in ad hoc networks. The simulation results confirm that the SMAF MAC enhances airtime fairness without degrading channel utilization, and that it effectively resolves several serious problems in ad hoc networks such as the starvation, performance anomaly, and hidden node problems.

17. Intelligent Stale-Frame Discards for Real-Time Video Streaming over Wireless Ad Hoc Networks
Sheu Tsang-Ling
2009-01-01
This paper presents intelligent early packet discards (I-EPD) for real-time video streaming over a multihop wireless ad hoc network. In a multihop wireless ad hoc network, the quality of transferring real-time video streams can be seriously degraded, since every intermediate node (IN), which functions like a relay device, does not possess a large buffer and sufficient bandwidth. Even worse, a selected relay node could leave or power off unexpectedly, breaking the route to the destination. Thus, a stale video frame is useless even if it reaches the destination after network traffic smooths out or the failed route is reconfigured. In the proposed I-EPD, an IN can intelligently determine whether a buffered video packet should be discarded early. For the purpose of validation, we implemented the I-EPD on Linux-based embedded systems. Via comparisons of performance metrics (packet/frame discard ratios, PSNR, etc.), we demonstrate that video quality over a wireless ad hoc network can be substantially improved while unnecessary bandwidth wastage is greatly reduced.
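Entry 17's early-discard logic can be sketched as a deadline check on the relay queue: a buffered frame whose playout deadline has already passed is dropped instead of forwarded. The names and the deadline policy below are illustrative assumptions, not the I-EPD implementation:

    from collections import deque

    class RelayQueue:
        """Toy relay buffer that discards stale video frames."""
        def __init__(self):
            self.buf = deque()           # items: (frame_id, playout_deadline)

        def enqueue(self, frame_id, deadline):
            self.buf.append((frame_id, deadline))

        def next_to_forward(self, now):
            """Pop frames in order, discarding any whose deadline has passed."""
            while self.buf:
                frame_id, deadline = self.buf.popleft()
                if deadline > now:
                    return frame_id      # still useful downstream
                # else: stale frame, drop it to save bandwidth
            return None

    q = RelayQueue()
    q.enqueue("f1", deadline=1.0)
    q.enqueue("f2", deadline=5.0)
    print(q.next_to_forward(now=2.0))    # f1 is stale and dropped; forwards f2

Dropping at the intermediate node, rather than at the receiver, is the point: bandwidth on the remaining hops is spent only on frames that can still be played out.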
18. Power Control in Reactive Routing Protocol for Mobile Ad Hoc Network
Maher HENI
2012-05-01
The aim of this work is to change the routing strategy of the AODV (Ad hoc On-Demand Distance Vector) protocol in order to improve energy consumption in mobile ad hoc networks (MANETs). The purpose is to minimize the regular period of the HELLO messages generated by the AODV protocol, which are used for the discovery, establishment, and maintenance of routes. This information is useful for gauging the battery levels of the different network hosts. After storing this information, a node that under the classical model would elect the shortest path instead uses it to elect the safest path, making a compromise in terms of energy: the transmitting node does not select a node whose battery will soon be exhausted. Any node of the network can hold the same information about its neighborhood, as well as information about the energy levels of the different terminals, in order to avoid routing over a link that will be lost due to the exhausted battery of one of its nodes. An analytical study and simulations in JiST/SWANS have been conducted, showing no divergence relative to classical AODV when a node holds this type of information, which improves energy efficiency in ad hoc networks.

19. Power-Controlled MAC Protocols with Dynamic Neighbor Prediction for Ad hoc Networks
LI Meng; ZHANG Lin; XIAO Yong-kang; SHAN Xiu-ming
2004-01-01
Energy and bandwidth are scarce resources in ad hoc networks, because most mobile nodes are battery-powered and share an exclusive wireless medium. Integrating power control into the MAC protocol is a promising technique for fully exploiting these precious resources of ad hoc wireless networks. In this paper, a new intelligent power-controlled Medium Access Control (iMAC) protocol with dynamic neighbor prediction is proposed. Through the elaborate design of the distributed transmit-receive strategy of mobile nodes, iMAC greatly outperforms the prevailing IEEE 802.11 MAC protocols in both energy conservation and network throughput. Using Dynamic Neighbor Prediction (DNP), iMAC performs well in mobile scenarios. To the best of our knowledge, iMAC is the first protocol that considers the performance deterioration of power-controlled MAC protocols in mobile scenarios and proposes a solution. Simulation results indicate that DNP is important and necessary for power-controlled MAC protocols in mobile ad hoc networks.

20. Coherent Route Cache In Dynamic Source Routing For Ad Hoc Networks
Sofiane Boukli Hacene
2012-02-01
An ad hoc network is a set of nodes that are able to move and can be connected in an arbitrary manner. Each node acts as a router and communicates over multi-hop wireless links. Nodes within ad hoc networks need efficient dynamic routing protocols to facilitate communication; an efficient routing protocol can provide significant benefits to mobile ad hoc networks in terms of both performance and reliability. Several routing protocols exist that allow and facilitate communication between mobile nodes. One of the promising routing protocols is DSR (Dynamic Source Routing). This protocol presents a major problem: the route cache contains some inconsistent routing information due to node mobility, which generates longer delays for data packets. In order to reduce these delays, we propose a technique based on cleaning the route caches of nodes within an active route. Our approach has been implemented and tested in the well-known network simulator GloMoSim, and the simulation results show that protocol performance is enhanced.
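Entry 20's cache-cleaning idea can be sketched directly: when a link break is detected, every cached source route that traverses the broken link is evicted, so stale routes stop generating delays. A minimal sketch, assuming a simple cache layout (not DSR's exact data structure):

    def clean_route_cache(cache, broken_link):
        """Evict cached source routes that traverse a broken link.
        cache: {destination: [route, ...]}; a route is a list of node ids."""
        a, b = broken_link
        def uses_link(route):
            return any((u, v) in ((a, b), (b, a))
                       for u, v in zip(route, route[1:]))
        return {dst: kept
                for dst, routes in cache.items()
                if (kept := [r for r in routes if not uses_link(r)])}

    cache = {"D": [["S", "A", "B", "D"], ["S", "C", "D"]],
             "E": [["S", "A", "B", "E"]]}
    print(clean_route_cache(cache, broken_link=("A", "B")))
    # {'D': [['S', 'C', 'D']]} -- every route through A-B is gone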
1. Optimal congestion control algorithm for ad hoc networks: Penalty function-based approach
XU Wei-qiang; WU Tie-jun
2006-01-01
In this paper, based on the inherent contention relation between flows in ad hoc networks, we introduce the notion of a link's interference set, extend the utility maximization problem representing congestion control in wireline networks to ad hoc networks, apply the penalty function approach and the subgradient method to solve this problem, and propose the congestion control algorithm Penalty-function-based Optimal Congestion Control (POCC), which is implemented in the NS2 simulator. Specifically, each link periodically transmits information on its congestion state to its interference set; the session at each source adjusts its transmission rate based on the optimal tradeoff between the utility value and the congestion level suffered by the interference sets of the links that the session traverses. MATLAB-based simulation results showed that POCC can approach the globally optimal solution. NS2-based simulation results showed that POCC outperforms default TCP and ATCP in achieving efficient and fair resource allocation in ad hoc networks.

2. Using Apriori algorithm to prevent black hole attack in mobile Ad hoc networks
2013-01-01
A mobile ad hoc network (MANET) is considered an autonomous network consisting of mobile nodes that communicate with each other over wireless links. When there is no fixed infrastructure, nodes have to cooperate in order to provide the necessary network functionality. The Ad hoc On-Demand Distance Vector (AODV) protocol is one of the principal routing protocols implemented in ad hoc networks. The security of the AODV protocol is threatened by a specific kind of attack called the black hole attack. This paper presents a technique to prevent the black hole attack by negotiating with neighbors that claim to maintain a route to the destination. The negotiation process is strengthened by the Apriori method for judging suspicious nodes. The Apriori algorithm is an effective association-rule mining method with relatively low complexity that is appropriate for MANETs. For further improvement, a fuzzy version of AODV is used. The simulation results indicate that, in the presence of black hole attacks, the proposed protocol provides more secure routing as well as better packet delivery, overhead, and detection rate than conventional AODV and fuzzy AODV.
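Entry 2 above leans on Apriori-style frequent-itemset mining. A compact generic sketch of the Apriori level-wise search (the transaction encoding below is invented for illustration and is not tied to the paper's features):

    from itertools import combinations

    def apriori(transactions, min_support):
        """Level-wise frequent-itemset mining (classic Apriori)."""
        transactions = [frozenset(t) for t in transactions]
        items = {frozenset([i]) for t in transactions for i in t}

        def support(c):
            return sum(c <= t for t in transactions) / len(transactions)

        frequent = {}
        level = {c for c in items if support(c) >= min_support}
        k = 1
        while level:
            frequent.update({c: support(c) for c in level})
            k += 1
            # candidate generation: join, then prune by the Apriori property
            candidates = {a | b for a in level for b in level if len(a | b) == k}
            level = {c for c in candidates
                     if all(frozenset(s) in frequent
                            for s in combinations(c, k - 1))
                     and support(c) >= min_support}
        return frequent

    tx = [{"drop", "rreq"}, {"drop", "rreq", "rrep"}, {"rreq"}, {"drop"}]
    print(apriori(tx, min_support=0.5))
    # frequent: {drop}, {rreq}, and {drop, rreq} with support 0.5

The Apriori property (every subset of a frequent itemset is frequent) is what keeps the candidate set small, which is why the method is attractive on resource-limited MANET nodes.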
3. Intrusion Detection System for Mobile Ad-Hoc Network Using Cluster-Based Approach
Nisha Dang
2012-06-01
Today, mobile ad hoc networks are in widespread use in normal as well as mission-critical applications. Mobile ad hoc networks are more likely to be attacked due to their lack of infrastructure and central management. To secure MANETs, many traditional security solutions such as encryption are used, but they have not proved promising. The intrusion detection system (IDS) is one of the technologies that provides good security solutions: an IDS provides monitoring and auditing capabilities to detect any abnormality in the security of the system, and it can be used with clustering algorithms to protect an entire cluster from malicious code. Existing clustering algorithms have the drawback of consuming more power, and they are tied to routes: route establishment and route renewal affect the clusters and, as a consequence, the processing and traffic overhead increase due to the instability of the clusters. Ad hoc networks are battery- and power-constrained, so an IDS cannot run on all the nodes; a trusted monitoring node can instead be deployed to detect and respond to intrusions in time. The proposed simplified clustering scheme has been used to detect intrusions, resulting in high detection rates and low processing and memory overhead, irrespective of the routes, connections, traffic types, and mobility of nodes in the network.

4. Network Parameters Impact on Dynamic Transmission Power Control in Vehicular Ad hoc Networks
2013-09-01
In vehicular ad hoc networks, dynamically changing the transmission power is very effective in increasing the throughput of the wireless vehicular network and decreasing the delay of message communication between vehicular nodes on the highway. Whenever an event occurs on the highway, the reliability of communication in the vehicular network becomes vital, so that event-generated messages reach all the moving network nodes. It becomes necessary that there be no interference from outside the network and that all neighbor nodes lie within the transmission range of the reference vehicular node. Transmission range is directly proportional to the transmission power of the moving node: if the transmission power is high, interference increases, which can cause higher delay in message reception at the receiver end, and hence network performance decreases. In this paper, it is analyzed how transmission power can be controlled by considering different network parameters such as density, the distance between moving nodes, and the dissemination of different message types with their priorities; the selection of an antenna also affects the transmission power. The dynamic control of transmission power in a VANET also serves to optimize resources: power can be decreased or increased where needed, depending on the circumstances of the network. Different applications and event types likewise cause changes in transmission power to enhance reachability. The analysis in this paper covers dynamic transmission power control based on density and on distance, with single-hop and multi-hop message broadcasting, as well as antenna selection and applications. Summary tables are produced for the respective parameters of the vehicular network. At the end, some valuable observations are made and discussed in detail, and the paper concludes with a grand summary of all the protocols discussed.

5. Security Scheme for Distributed DoS in Mobile Ad Hoc Networks
Sanyal, Sugata; Gogri, Rajat; Rathod, Punit; Dedhia, Zalak; Mody, Nirali
2010-01-01
In mobile ad hoc networks (MANETs), various types of denial-of-service (DoS) attacks are possible because of the inherent limitations of their routing protocols. Taking the Ad hoc On-Demand Distance Vector (AODV) routing protocol as the base protocol, it is possible to find a suitable solution to overcome the attack of initiating/forwarding fake route requests (RREQs), which lead to the hogging of network resources and hence denial of service to genuine nodes. In this paper, a proactive scheme is proposed that can prevent a specific kind of DoS attack and identify the misbehaving node. Since the proposed scheme is distributed in nature, it has the capability to prevent distributed DoS (DDoS) as well.
The performance of the proposed algorithm in a series of simulations reveals that the proposed scheme provides a better solution than existing approaches, with no extra overhead.

6. Reliable Coverage Area Based Link Expiration Time (LET) Routing Metric for Mobile Ad Hoc Networks
Ahmed, Izhar; Tepe, K. E.; Singh, B. K.
This paper presents a new routing metric for mobile ad hoc networks. It considers both the coverage area and link expiration information, which in turn requires the position, speed, and direction information of nodes in the network. With this new metric, a routing protocol obtains routes that last longer, with as few hops as possible. The proposed routing metric is implemented with the Ad hoc On-Demand Distance Vector (AODV) routing protocol; thus, its performance is tested against the minimum-hop metric of AODV. Simulation results show that the AODV protocol with the new routing metric significantly improves delivery ratio and reduces routing overhead. The delay performance of AODV with the new metric is comparable to its minimum-hop implementation.

7. Analysis and Proposal of Position-Based Routing Protocols for Vehicular Ad Hoc Networks
Okada, Hiraku; Takano, Akira; Mase, Kenichi
One of the most promising applications of a mobile ad hoc network is the vehicular ad hoc network (VANET). Each vehicle is aware of its position through GPS or other methods, so position-based routing is a useful approach in VANETs. Position-based routing protocols can be classified roughly into next-hop forwarding methods and directed flooding methods. We evaluate the performance of both methods analytically and compare them in this paper. From the evaluation results, we conclude that it is effective for position-based routing to choose either next-hop forwarding or directed flooding according to the environment. We then propose a hybrid transmission method that can select between them according to the environment, and we show that the proposed method keeps the packet delivery ratio at a high level while reducing delay.

8. A LOOP-BASED APPROACH IN CLUSTERING AND ROUTING IN MOBILE AD HOC NETWORKS
Li Yanping; Wang Xin; Xue Xiangyang; C.K. Toh
2006-01-01
Although clustering is a convenient framework for enabling traffic control and service support in mobile ad hoc networks (MANETs), it is seldom adopted in practice due to the additional traffic overhead it imposes on the resource-limited ad hoc network. In order to address this problem, we propose a loop-based approach that combines clustering and routing. By employing loop topologies, topology information is disseminated by a loop instead of a single node, which provides better robustness; and the fact that a loop offers two paths between each pair of its nodes suggests a smart route recovery strategy. Our approach is composed of a setup procedure, a regular procedure, and a recovery procedure to achieve clustering, routing, and emergency route recovery.
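Entry 6 above combines coverage area with link expiration time (LET). Under the constant-velocity model widely used in the literature, the time until two nodes drift out of range r has a closed form; a minimal sketch (the node-state fields are assumptions):

    import math

    def link_expiration_time(xi, yi, vi, ti, xj, yj, vj, tj, r):
        """Classic LET for nodes with positions (x, y), speeds v, and
        headings t (radians), communication range r:
        a = vi*cos(ti) - vj*cos(tj), b = xi - xj,
        c = vi*sin(ti) - vj*sin(tj), d = yi - yj,
        LET = (-(ab + cd) + sqrt((a^2 + c^2) r^2 - (ad - bc)^2)) / (a^2 + c^2)."""
        a = vi * math.cos(ti) - vj * math.cos(tj)
        b = xi - xj
        c = vi * math.sin(ti) - vj * math.sin(tj)
        d = yi - yj
        if a == 0 and c == 0:            # identical velocities: never expires
            return math.inf
        disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
        if disc < 0:                     # nodes never come within range r
            return 0.0
        return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

    # Two nodes 50 m apart closing head-on at 10 m/s each, range 100 m:
    print(link_expiration_time(0, 0, 10, 0.0, 50, 0, 10, math.pi, 100))  # 7.5 s

A route's predicted lifetime is the minimum LET over its links, which is how a metric like entry 6's can prefer routes that last longer rather than merely shorter ones.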
9. Performance Evaluation of Unicast and Broadcast Mobile Ad hoc Network Routing Protocols
Debnath, Sumon Kumar; Islam, Nayeema
2010-01-01
Efficient routing is a challenging issue for group-oriented computing in mobile ad hoc networks (MANETs). The ability of MANETs to support adequate quality of service (QoS) for group communication is limited by the ability of the underlying ad hoc routing protocols to provide consistent behavior despite the dynamic properties of mobile computing devices. In MANETs, QoS requirements can be quantified in terms of packet delivery ratio (PDR), data latency, packet loss probability, routing overhead, medium access control (MAC) overhead, data throughput, etc. This paper presents an in-depth study of one-to-many and many-to-many communication in MANETs and provides a comparative performance evaluation of unicast and broadcast routing protocols. The Dynamic Source Routing (DSR) protocol is used as the unicast protocol, and BCAST is used to represent broadcast protocols. The performance differentials are analyzed using the ns2 network simulator, varying the multicast group size (number of data senders and data receivers). Bo...

10. Improved Packet Forwarding Approach in Vehicular Ad Hoc Networks Using RDGR Algorithm
Prasanth, K; Jayasudha, K; Chandrasekar, Dr C; 10.5121/ijngn.2010.2106
2010-01-01
VANETs (vehicular ad hoc networks) are highly mobile wireless ad hoc networks that will play an important role in public safety communications and commercial applications. Routing of data in VANETs is a challenging task due to the rapidly changing topology and high-speed mobility of vehicles. Position-based routing protocols are becoming popular due to the advancement and availability of GPS devices. One of the critical issues in VANETs is frequent path disruption caused by the high-speed mobility of vehicles, which leads to broken links and results in low throughput and high overhead. This paper argues for the use of information on vehicles' movement (e.g., position, direction, and speed) to predict a possible link-breakage event prior to its occurrence. We therefore propose Reliable Directional Greedy Routing (RDGR), a reliable position-based routing approach that obtains the position, speed, and direction of neighboring nodes from GPS. This approach incorporates a potential-score-based strategy...

11. Adaptive and Secure Routing Protocol for Emergency Mobile Ad Hoc Networks
Panaousis, Emmanouil A; Millar, Grant P; Politis, Christos
2010-01-01
The nature of mobile ad hoc networks (MANETs) makes them suitable for use in the context of an extreme emergency by all involved rescue teams. We use the term emergency MANETs (eMANETs) to describe next-generation IP-based networks deployed in emergency cases such as forest fires and terrorist attacks. The main goal within the realm of eMANETs is to provide emergency workers with intelligent devices such as smartphones and PDAs. This technology allows communication "islets" to be established between the members of the same or different emergency teams (policemen, firemen, paramedics). In this article, we discuss an adaptive and secure routing protocol developed for the purposes of eMANETs. We evaluate the performance of the protocol by comparing it with other widely used routing protocols for MANETs. We finally show that the overhead introduced due to security considerations is affordable for supporting secure ad hoc communications among lightweight devices.

12. Adaptive and Secure Routing Protocol for Emergency Mobile Ad Hoc Networks
Emmanouil A. Panaousis
2010-05-01
13. Authentication Using Trust to Detect Misbehaving Nodes in Mobile Ad hoc Networks Using Q-Learning S. Sivagurunathan 2016-05-01 Full Text Available Providing security in a Mobile Ad Hoc Network is a crucial problem due to its open shared wireless medium, multi-hop and dynamic nature, constrained resources, and lack of administration and cooperation. Traditionally, routing protocols are designed to cope with the routing operation, but in practice they may be affected by misbehaving nodes that try to disturb normal routing operations by launching different attacks with the intention of degrading or collapsing the overall network performance. Detecting trusted nodes therefore ensures authentication and allows secure routing to be expected. In this article we propose a Trust and Q-learning based Security (TQS) model to detect misbehaving nodes over the Ad Hoc On-Demand Distance-Vector (AODV) routing protocol. We avoid misbehaving nodes by calculating an aggregated reward, based on the Q-learning mechanism, from their historical forwarding and responding behaviour, by which misbehaving nodes can be isolated. 14. Cooperation in Carrier Sense Based Wireless Ad Hoc Networks - Part II: Proactive Schemes Munari, Andrea; Zorzi, Michele 2012-01-01 This work is the second of a two-part series of papers on the effectiveness of cooperative techniques in non-centralized carrier-sense-based ad hoc wireless networks. While Part I extensively discussed reactive cooperation, characterized by relayed transmissions triggered by failure events at the intended receiver, Part II investigates in depth proactive solutions, in which the source of a packet exploits channel state information to preemptively coordinate with relays in order to achieve the optimal overall rate to the destination. In particular, this work shows by means of both analysis and simulation that the performance of reactive cooperation is reduced by the intrinsic nature of the considered medium access policy, which biases the distribution of the available relays, locating them in unfavorable positions for rate optimization. Moreover, the highly dynamic nature of interference that characterizes non-infrastructured ad hoc networks is proved to hamper the efficacy and the reliability of preemptively ... 15. A REVIEW ON ADVANCED TRAFFIC CONTROL TECHNIQUES IN MOBILE AD-HOC NETWORK 2012-01-01 A mobile ad hoc network (MANET) is a dynamic distributed system of wireless nodes that move independently of each other. The operating transmission range of the nodes is limited and, as a result, MANET routes are often multi-hop in nature. Any node in a MANET can become a source or destination, and each node can function as a router, forwarding data for its peers. MANET routing protocols are either proactive or reactive in nature. Proactive routing protocols determine and maintain routes between any pa...
16. A Novel Multi-Level Trust Algorithm for Mobile Ad Hoc Networks YU Genjian; ZHENG Baoyu 2006-01-01 Firstly, a multilevel trust algorithm for MANETs (mobile ad hoc networks) is presented in this paper, with the trust level defined as a three-tuple type. The paper introduces multilevel trust into MANETs, thereby controlling restricted classified information flows among nodes that have different trust levels. Secondly, the infrastructure of a MANET suited to our multi-level trust is presented. Some conclusions are given at last. 17. Intelligent Security Auditing Based on Access Control of Devices in Ad Hoc Network XU Guang-wei; SHI You-qun; ZHU Ming; WU Guo-wen; CAO Qi-ying 2006-01-01 Security in ad hoc networks is an important issue under the open circumstances of application services. Some protocols and models of security auditing have been proposed to ensure the rationality of contracting strategies and operating regulations, and are used to identify abnormal operations. A model of security auditing based on access control of devices is advanced to register the signs of devices and the properties of access-control events and to audit those actions. In the end, the model is analyzed and simulated. 18. Effects of Data Replication on Data Exfiltration in Mobile Ad Hoc Networks Utilizing Reactive Protocols 2015-03-01 (No abstract was extracted for this thesis; the recovered fragments are front matter only: figure captions on reverse path formation in AODV and on channel selection in CA-AODV, an acronym list including ACO, AODV, CA-AODV, CAN and CDS, and a remark that the IEEE 802.11 (WiFi) a/b/g/n standards are the most prominent in MANETs.) 19. A Survey on Intrusion in Ad Hoc Networks and its Detection Measures Ms. Preetee K. Karmore, 2009-05-01 Full Text Available Ad hoc wireless networks are defined as the category of wireless networks that utilize multi-hop radio relaying and are capable of operating without the support of any fixed infrastructure; hence, they are called infrastructure-less networks. The lack of any central coordination makes them more vulnerable to attacks than wired networks. Due to some unique characteristics of MANETs, prevention methods alone are not sufficient to make them secure; therefore, detection should be added as another defense before an attacker can breach the system. Network intrusion detection is the process of monitoring the events occurring in the network and analyzing them for signs of intrusions, defined as attempts to compromise confidentiality. In this paper, we define and discuss various techniques of intrusion detection. We also present a description of routing protocols and the types of security attacks possible in the network. 20. A concurrent access MAC protocol for cognitive radio ad hoc networks without common control channel Timalsina, Sunil K.; Moh, Sangman; Chung, Ilyong; Kang, Moonsoo 2013-12-01 Cognitive radio ad hoc networks (CRAHNs) consist of autonomous nodes that operate in ad hoc mode and aim at efficient utilization of spectrum resources. Usually, the cognitive nodes in a CRAHN exploit a number of available channels, but these channels are not necessarily common to all nodes. Such a network environment poses the problem of establishing a common control channel (CCC), as there might be no channel common to all the network members at all.
In designing protocols, therefore, it is highly desirable to consider a network environment with no CCC. In this article, we propose a MAC protocol called concurrent access MAC (CA-MAC) that operates in a network environment with no CCC. The two devices in a communication pair can communicate with each other even if they have only one common channel available. Therefore, the problems with a CCC (such as channel saturation and denial-of-service attacks) can also be resolved. In CA-MAC, channel accesses are distributed over communication pairs, resulting in increased network connectivity. In addition, CA-MAC allows different communication pairs to access multiple channels concurrently. According to our performance study, CA-MAC provides higher network connectivity with shorter channel access delay compared to SYN-MAC, the conventional key MAC protocol for network environments with no CCC, resulting in better network throughput. 1. Provisioning QoS Guarantee by Multipath Routing and Reservation in Ad Hoc Networks Yan-Tai Shu; Guang-Hong Wang; Lei Wang; Oliver W. W. Yang; Yong-Jie Fan 2004-01-01 In this paper, a QoS multipath source routing protocol (QoS-MSR) is proposed for ad hoc networks. It can collect QoS information through the route discovery mechanism of multipath source routing (MSR) and establish QoS routes with reserved bandwidth. In order to reserve bandwidth efficiently, a bandwidth reservation approach called multipath bandwidth splitting reservation (MBSR) is presented, under which an overall bandwidth request is split into several smaller bandwidth requests among multiple paths. In simulations, the authors introduce Insignia, an in-band signaling system that supports QoS in ad hoc networks, and extend it to multipath Insignia (M-Insignia) with QoS-MSR and MBSR. The results show that the QoS-MSR routing protocol with the MBSR algorithm can improve the call admission ratio of QoS traffic, the packet delivery ratio, and the end-to-end delay of both best-effort traffic and QoS traffic. Therefore, QoS-MSR with MBSR is an efficient mechanism that supports QoS for ad hoc networks. 2. Analisis Performansi Routing Protocol OLSR Dan AOMDV Pada Vehicular Ad Hoc Network (VANET) Rianda Anisia 2016-09-01 Full Text Available Vehicular Ad Hoc Networks (VANETs) are a development of Mobile Ad Hoc Networks (MANETs) that use vehicles as their nodes. VANET technology is expected to improve driver safety on the highway through, among other things, location maps, traffic information, warnings of imminent collisions, and internet access in the vehicle. However, VANETs are characterized by a rapidly changing network topology due to the rapid movement of nodes, so a routing protocol considered suitable and efficient is needed for data transmission to proceed optimally. This research simulates and analyzes the comparative performance of the Optimized Link State Routing protocol (OLSR) and Ad Hoc On-demand Multipath Distance Vector (AOMDV) under urban conditions. The environment is tested under changes in node speed and in the number of nodes. The simulation was carried out using NS-2.34 equipped with SUMO 0.12.3 as the mobility generator and MOVE as the script generator. Performance was measured using parameters such as average throughput, Packet Delivery Ratio, average end-to-end delay, Normalized Routing Load, and Routing Overhead. The analysis shows that in a VANET environment the AOMDV routing protocol is superior to OLSR.
In almost all of the parameters tested under changes in the number of nodes and in node speed, AOMDV showed better performance, so AOMDV is more efficient for use in urban environments. 3. Self-Organization in Mobile Ad-Hoc Networks: the Approach of Terminodes Blazevic, Ljubica; Buttyan, Levente; Capkun, Srdjan; Giordano, Silvia; Hubaux, Jean-Pierre; Le Boudec, Jean-Yves 2001-01-01 The Terminodes project is designing a wide-area mobile ad hoc network which is meant to be used in a public environment; in our approach, the network is run by the users themselves. We give a global description of the building blocks used by the basic operation of the network; they all rely on various concepts of self-organization. Routing uses a combination of geography-based information and local, MANET-like protocols. Terminode positioning is obtained either by GPS, or by a relative positio... 4. Hardware in Loop Simulation for Emergency Communication Mobile Ad Hoc Network YANG Jie; AN Jian-ping; LIU Heng 2007-01-01 For research on mobile ad hoc networks (MANETs), hardware-in-the-loop simulation (HILS) is introduced to improve simulation fidelity. The architectures and frameworks of HILS systems are discussed. Based on HILS and the QualNet network simulator, two kinds of simulation frameworks for a MANET multicast emergency communication network are proposed. By running simulations under this configuration and doing experiments with the On-Demand Multicast Routing Protocol (ODMRP), the unicast and multicast functions of this protocol are tested. Research results indicate that the HILS method can effectively reduce the difficulty of system modeling, improve the precision of simulation, and further accelerate the transition from design to system deployment. 5. A TDMA based media access control protocol for wireless ad hoc networks Yang, Qi; Tang, Biyu 2013-03-01 This paper presents a novel Time Division Multiple Access (TDMA) based Media Access Control (MAC) protocol for wireless ad hoc networks. To achieve collision-free transmission, the time slots in a MAC frame are categorized into three types: access slots, control slots and traffic slots. Nodes access the network in the access slot, and an exclusive control slot is allocated subsequently. Data packets are transmitted by dynamically scheduling the traffic slots. Throughput and transmission delay are also analyzed by simulation experiments. The proposed protocol is capable of providing collision-free transmission and achieves high throughput. 6. An Assessment of Worm Hole attack over Mobile Ad-Hoc Network as serious threats 2013-07-01 Full Text Available Nowadays, mobile ad hoc networks are vulnerable to a number of security threats, such as the black hole attack, DoS attack, Byzantine attack and wormhole attack. The wormhole attack is one of the most important of these and has received great attention in recent years. A wormhole attack creates an illusion over the network that makes two far-away nodes appear to be neighbors, attracting all traffic by advertising the shortest path over the network. This paper presents a bird's-eye view of different existing wormhole detection mechanisms and their problems.
7. An Overview of Mobile Ad Hoc Networks for the Existing Protocols and Applications Al-Omari, Saleh Ali K; 10.5121/jgraphhoc.2010.2107 2010-01-01 A Mobile Ad Hoc Network (MANET) is a collection of two or more devices, nodes or terminals with wireless communication and networking capability that communicate with each other without the aid of any centralized administrator; the wireless nodes can dynamically form a network to exchange information without using any existing fixed network infrastructure. It is an autonomous system in which mobile hosts connected by wireless links are free to move dynamically and at times act as routers. In this paper we discuss the characteristics that distinguish MANETs from traditional wired networks, including that the network configuration may change at any time and that there is no restriction on the direction or extent of movement, so that a new routing protocol is needed to identify the paths through which nodes communicate with each other. An ideal routing protocol should not only be able to find the right path; the ad hoc network must also be able to adapt to changes in a network of this type at any... 8. Distinguishing congestion from malicious behavior in mobile ad-hoc networks Ding, Jin; Medidi, Sirisha R. 2004-08-01 Packet dropping in mobile ad hoc networks could be the result of wireless link errors, congestion, or a malicious packet drop attack. Current techniques for detecting malicious behavior either do not consider congestion in the network or are not able to detect it in real time. Furthermore, they usually work at the network layer. In this paper, we propose a TCP-Manet protocol, which reacts to congestion like the TCP Reno protocol and has the additional capability to distinguish among congestion, wireless link error, and malicious packet drop attack. It is an end-to-end mechanism that does not require additional modifications to the nodes in the network. Since it is an extension of the existing TCP protocol, it is compatible with existing protocols. It works in conjunction with the network layer and an unobtrusive monitor to assist the network in the detection and characterization of the nature of the behavior. Experimental results show that TCP-Manet has the same performance as TCP-Reno in wired networks, and performs better in wireless ad hoc networks in terms of throughput while having good detection effectiveness. 9. Dynamic autonomous routing technology for IP-based satellite ad hoc networks Wang, Xiaofei; Deng, Jing; Kostas, Theresa; Rajappan, Gowri 2014-06-01 IP-based routing for military LEO/MEO satellite ad hoc networks is very challenging due to network and traffic heterogeneity, and network topology and traffic dynamics. In this paper, we describe a traffic-priority-aware routing scheme for such networks, namely Dynamic Autonomous Routing Technology (DART) for satellite ad hoc networks. DART has a cross-layer design and conducts routing and resource reservation concurrently for optimal performance in the fluid but predictable satellite ad hoc networks. DART ensures end-to-end data delivery with QoS assurances by only choosing routing paths that have sufficient resources, supporting different packet priority levels. In order to do so, DART incorporates several resource management and innovative routing mechanisms, which dynamically adapt to best fit the prevailing conditions.
In particular, DART integrates a resource reservation mechanism to reserve network bandwidth resources; a proactive routing mechanism to set up non-overlapping spanning trees that segregate high-priority traffic flows from lower-priority flows, so that high-priority flows do not face contention from low-priority flows; a reactive routing mechanism to arbitrate resources between traffic priorities when needed; and a predictive routing mechanism to set up routes for scheduled missions and for anticipated topology changes, for QoS assurance. We present simulation results showing the performance of DART. We have conducted these simulations using the Iridium constellation and trajectories as well as realistic military communications scenarios. The simulation results demonstrate DART's ability to discriminate between high-priority and low-priority traffic flows and to ensure the disparate QoS requirements of these traffic flows. 10. A Light-Weight Service Discovery Protocol for Ad Hoc Networks Ranwa A. Mallah 2009-01-01 Full Text Available Problem statement: In mobile ad hoc networks, devices do not rely on a fixed infrastructure and thus have to be self-organizing. This gives rise to various challenges for network applications. Existing service discovery protocols fall short of accommodating the complexities of the ad hoc environment. The performance of distributed service discovery architectures that rely on a virtual backbone for locating and registering available services appears very promising in terms of average delay, but in terms of message overhead they are the most heavy-weight. In this research we propose a very light-weight, robust and reliable model for service discovery in wireless and mobile networks that takes into account the limited resources to which mobile units are subjected. Approach: Three processes are involved in service discovery protocols using a virtual dynamic backbone for mobile ad hoc networks: registration, discovery and consistency maintenance. More specifically, the model analytically and realistically differentiates stable from unstable nodes in the network in order to form a subset of nodes constituting a relatively stable virtual backbone (BB). Results: Overall, the results acquired were very satisfactory and meet the performance objectives of effectiveness, especially in terms of network load. A notable reduction of almost 80% in signaling messages was observed in the network. This criterion distinguishes our proposal and corroborates its light-weight characteristic. On the other hand, the results showed a reasonable mean time delay for the requests initiated by clients. Conclusion: Extensive simulation results confirm the efficiency and the light-weight characteristic of our approach in significantly reducing the cost of message overhead, in addition to having the best delay values when compared with strategies well known in the literature. 11. Highway Mobility and Vehicular Ad-Hoc Networks in NS-3 2010-01-01 The study of vehicular ad hoc networks (VANETs) requires efficient and accurate simulation tools. As the mobility of vehicles and driver behavior can be affected by network messages, these tools must include a vehicle mobility model integrated with a quality network simulator. We present the first implementation of a well-known vehicle mobility model in ns-3, the next generation of the popular ns-2 networking simulator. Vehicle mobility and network communication are integrated through events.
User-created event handlers can send network messages or alter vehicle mobility each time a network message is received and each time vehicle mobility is updated by the model. To aid in creating simulations, we have implemented a straight highway model that manages vehicle mobility while allowing for various user customizations. We show that the results of our implementation of the mobility model match those of the model's author, and we provide an example of using our implementation in ns-3. 12. Optimal Power Control for Concurrent Transmissions of Location-aware Mobile Cognitive Radio Ad Hoc Networks Song, Yi 2011-01-01 In a cognitive radio (CR) network, CR users intend to operate over the same spectrum band licensed to legacy networks. A tradeoff exists between protecting the communications in legacy networks and maximizing the throughput of CR transmissions, especially when CR links are unstable due to the mobility of CR users. Because of the non-zero probability of false detection and the implementation complexity of spectrum sensing, in this paper we investigate a sensing-free spectrum sharing scenario for mobile CR ad hoc networks to improve frequency reuse by incorporating a location-awareness capability in CR networks. We propose an optimal power control algorithm for the CR transmitter to maximize the concurrent transmission region of CR users, especially in mobile scenarios. Under the proposed power control algorithm, the mobile CR network achieves maximized throughput without causing harmful interference to primary users in the legacy network. Simulation results show that the proposed optimal power control algori... 13. On Protocols to Prevent Black Hole Attacks in Mobile Ad Hoc Networks Umesh Kumar Singh 2015-01-01 Full Text Available Wireless and mobile networks emerged to replace wired networks. The new generation of wireless networks differs from traditional wired networks in many aspects, such as resource sharing, power usage, reliability, efficiency, ease of handling, network infrastructure and routing protocols. Mobile Ad Hoc Networks (MANETs) are autonomous and decentralized wireless systems. MANETs consist of mobile nodes that are free to move in and out of the network. There is an increasing threat of attacks on MANETs, among which black hole attacks are the most serious security attacks. In this paper, we examine certain routing protocols for black hole attack prevention. Finally, we compare some routing protocols using several important parameters and then address the major issues related to this. 14. An Investigation about Performance Comparison of Multi-Hop Wireless Ad-Hoc Network Routing Protocols in MANET S. Karthik 2010-05-01 Full Text Available A Mobile Ad Hoc Network (MANET) is a collection of wireless mobile hosts forming a temporary network without the aid of any stand-alone infrastructure or centralized administration. Mobile ad hoc networks are self-organizing and self-configuring multi-hop wireless networks in which the structure of the network changes dynamically, mainly due to the mobility of the nodes. The nodes in the network not only act as hosts but also as routers that route data to or from other nodes. In mobile ad hoc networks a routing procedure is always needed to find a path so as to forward packets appropriately between the source and the destination.
The main aim of any ad hoc network routing protocol is to meet the challenges of the dynamically changing topology and to establish a correct and efficient communication path between any two nodes with minimum routing overhead and bandwidth consumption. The design of such a routing protocol is not simple, since an ad hoc environment introduces new challenges that are not present in fixed networks. A number of routing protocols have been proposed for this purpose, such as Ad Hoc On Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Destination-Sequenced Distance Vector (DSDV). In this paper, we study and compare the performance of the three routing protocols AODV, DSR and DSDV. 15. Intelligent Networks Data Fusion Web-based Services for Ad-hoc Integrated WSNs-RFID Falah Alshahrany 2016-01-01 Full Text Available The use of a variety of data fusion tools and techniques for big data processing poses the problem of data and information integration, called data fusion, whose objectives can differ from one application to another. The design of network data fusion systems aimed at meeting these objectives needs to take into account the necessary synergy that can result from distributed data processing within data networks and data centres, involving increased computation and communication. This paper reports on how this processing distribution is functionally structured as configurable integrated web-based support services, in the context of an ad hoc wireless sensor network used for sensing and tracking, with distributed detection based on complete observations to support real-time decision making. The interrelated functional and hardware RFID-WSN integration is an essential aspect of the data fusion framework, which focuses on multi-sensor collaboration as an innovative approach to extend the heterogeneity of the devices and sensor nodes of ad hoc networks that generate a huge amount of heterogeneous soft and hard raw data. The deployment and configuration of these networks require data fusion processing that includes network and service management and enhances the performance and reliability of network data fusion support systems, providing intelligent capabilities for real-time access control and fire detection. 16. On capacity of wireless ad hoc networks with MIMO MMSE receivers Ma, Jing 2008-01-01 17. Performance Comparison of Secure Routing Protocols in Mobile Ad-Hoc Networks Ashwani Garg 2012-08-01 18. Analyzing Video Streaming Quality by Using Various Error Correction Methods on Mobile Ad hoc Networks in NS2 Norrozila Sulaiman 2014-10-01 Full Text Available Transmission of video over ad hoc networks has become one of the most important and interesting subjects of study for researchers and programmers because of the strong relationship between video applications and the frequent use of various mobile devices, such as laptops, PDAs, and mobile phones, in all aspects of life. However, many challenges exist in transferring video over ad hoc networks, such as packet loss, congestion (i.e., impairments at the network layer), multipath fading (i.e., impairments at the physical layer) [1], and link failure; these challenges negatively affect the quality of the perceived video [2]. This study investigates video transfer over ad hoc networks.
The main challenges of transferring video over ad hoc networks, the types of errors that may occur during video transmission, the various types of video mechanisms, error correction methods, and the different Quality of Service (QoS) parameters that affect the quality of the received video are also investigated. 19. Huang Chenn-Jung 2008-01-01 Full Text Available Abstract With the growth of the internet in mobile commerce, researchers have produced various mobile applications that vary from entertainment and commercial services to diagnostic and safety tools. Mobility management has widely been recognized as one of the most challenging problems for seamless access to wireless networks. In this paper, a novel link enhancement mechanism is proposed to deal with the mobility management problem in vehicular ad hoc networks. Two machine learning techniques, namely particle swarm optimization and fuzzy logic systems, are incorporated into the proposed schemes to enhance the accuracy of prediction of link breaks and congestion occurrence. The experimental results verify the effectiveness and feasibility of the proposed schemes. 20. A Survey of Unipath Routing Protocols for Mobile Ad Hoc Networks M. Barveen Banu 2013-12-01 Full Text Available A MANET (Mobile Ad hoc NETwork) is an interconnection of mobile devices by wireless links forming a dynamic topology without much physical network infrastructure, such as routers, servers, access points/cables or centralized administration. Routing is the mechanism of exchanging data between a source node and a destination node. Several protocols are used to route information from the source node to the destination node. The main aim of this paper is to explore the working principles of each unipath routing protocol. The unipath routing protocols are divided into Table-Driven (Proactive), On-demand (Reactive), and Hybrid routing protocols. 1. A survey of medium access control protocols for wireless ad hoc networks Elvio João Leonardo 2004-01-01 Full Text Available A number of issues distinguish Medium Access Control (MAC) protocols for wireless networks from those used in wireline systems. In addition, for ad hoc networks, the characteristics of the radio channel, the diverse physical-layer technologies available and the range of services envisioned make it a difficult task to design an algorithm to discipline access to the shared medium that is efficient, fair, sensitive to power consumption and delay-bounded. This article presents the current state of the art in this area, including solutions already commercially available as well as those still under study. 2. An SPN analysis method for parallel scheduling in Ad Hoc networks 盛琳阳; 徐文超; 贾世楼 2004-01-01 In this paper, a new analytic method for modeling and evaluating mobile ad hoc networks (MANETs) is proposed. Petri net techniques are introduced into MANETs, and a packet-flow parallel scheduling scheme is presented using Stochastic Petri Nets (SPN). The flow of tokens is used graphically to characterize the dynamic features of sharing a single wireless channel. Through SPN reachability analysis and isomorphic continuous-time Markov process equations, some network parameters, such as channel efficiency and one-hop transmission delay, can be obtained. Compared with conventional performance evaluation methods, these parameters are mathematical expressions instead of test results from a simulator.
3. AN ENHANCEMENT SCHEME OF TCP PROTOCOL IN MOBILE AD HOC NETWORKS: MME-TCP Kai Caihong; Yu Nenghai; Chen Yuzhong 2007-01-01 Transmission Control Protocol (TCP) optimization in Mobile Ad hoc NETworks (MANETs) is a challenging issue because of some unique characteristics of MANETs. In this paper, a new end-to-end mechanism based on multiple-metric measurement is proposed to improve TCP performance in MANETs. Multi-metric Measurement based Enhancement of TCP (MME-TCP) designs the metrics and the identification algorithm according to the characteristics of MANETs and the experiment results. Furthermore, these metrics are measured at the sender node to reduce the overhead of control information over the network. Simulation results show that the MME-TCP mechanism achieves a significant performance improvement over standard TCP in MANETs. 4. QDSR: QoS-aware Dynamic Source Routing Protocol for Mobile Ad Hoc Network SHI Minghong; YAO Yinxiong; BAI Yingcai 2004-01-01 QoS routing in wireless ad hoc networks faces many technical challenges due to time-varying links and the random mobility of nodes in these dynamic networks. In this paper, we design a QoS-aware dynamic source routing protocol (QDSR) based on DSR. QDSR uses minimum cost as the constraint; modifies the route discovery, route reply and route maintenance mechanisms of DSR; and adds the capability of path testing and initial resource reservation. The results of robustness, stability and performance simulations demonstrate that it suits the fluctuations of dynamic environments very well. 5. An Adaptive Scheme for Neighbor Discovery in Mobile Ad Hoc Networks 2007-01-01 Neighbor knowledge in mobile ad hoc networks is important information. However, the accuracy of neighbor knowledge is paid for in terms of energy consumption. In traditional schemes for neighbor discovery, a mobile node uses a fixed period to send HELLO messages announcing its existence. An adaptive scheme is proposed here. The objective is that when mobile nodes are distributed sparsely or move slowly, fewer HELLO messages are needed to achieve reasonable accuracy, while in a mutable network where nodes are dense or move quickly, they can adaptively send more HELLO messages to ensure accuracy. Simulation results show that the adaptive scheme achieves this objective and performs effectively. 6. Outage Analysis of Opportunistic Cooperative Ad Hoc Networks with Randomly Located Nodes Cheng-Wen Xing; Hai-Chuan Ding; Guang-Hua Yang; Shao-Dan Ma; Ze-Song Fei 2013-01-01 In this paper, an opportunistic cooperative ad hoc sensor network with randomly located nodes is analyzed. The randomness of the nodes' locations is captured by a homogeneous Poisson point process. The effect of imperfect interference cancellation is also taken into account in the analysis. Based on the theory of stochastic geometry, the outage probability and cooperative gain are derived. It is demonstrated that an explicit performance gain can be achieved through cooperation. The analyses are corroborated by extensive simulation results, and the analytical results can thus serve as a guideline for wireless sensor network design.
8. Secure neighborhood discovery: A fundamental element for mobile ad hoc networking Papadimitratos, P.; Poturalski, M.; Schaller, P. 2008-01-01 Pervasive computing systems will likely be deployed in the near future, with the proliferation of wireless devices and the emergence of ad hoc networking as key enablers. Coping with mobility and the volatility of wireless communications in such systems is critical. Neighborhood discovery (ND) - the discovery of devices directly reachable for communication or in physical proximity - becomes a fundamental requirement and building block for various applications. However, the very nature of wireless mobile networks makes it easy to abuse ND and thereby compromise the overlying protocols and applications. 9. A Layered Zone Routing Algorithm in Ad Hoc Network Based on Matrix of Adjacency Connection XU Guang-wei; LI Feng; SHI Xiu-jin; HUO Jia-zhen 2007-01-01 The hybrid routing protocol has recently received more attention than the proactive and the reactive ones, especially for large-scale and highly dynamic connections in mobile ad hoc networks. A crucial reason is that zone layering is being utilized in such complex systems. A hybrid routing algorithm based on layered zones and adjacency connections (LZBAC) is put forward for the setting where a few members of the network have steady positions and links. The algorithm modifies the storage structure of nodes and improves the routing mechanism. Theoretical analysis and simulation testing testify that the algorithm requires less route-finding time and incurs less delay than others. 10. AN MAC PROTOCOL SUPPORTING MULTIPLE TRAFFIC OVER MOBILE AD HOC NETWORKS Tian Hui; Li Yingyang; Hu Jiandong; Zhang Ping 2003-01-01 This letter presents the design and performance of a multi-channel MAC protocol that supports multiple traffic types for IEEE 802.11 mobile ad hoc networks. A dynamic channel selection scheme based on receiver decision is implemented, and the number of data channels is independent of the network topology. Priority for real-time traffic is assured by the proposed adaptive backoff algorithm and different IFSs. The protocol is evaluated by simulation, and the results show that it can support multiple traffic types with performance better than that provided by the IEEE 802.11 standard.
12. 2007-01-01 In wireless ad hoc networks, where mobile hosts are powered by batteries, the entire network may become partitioned because of the drainage of a small set of batteries. The crucial issue is therefore to improve energy efficiency, with the objective of balancing energy consumption. A greedy algorithm called weighted minimum spanning tree (WMST) has been proposed, whose time complexity is O(n^2). This algorithm takes into account the initial energy of each node and the energy consumption of each communication. Simulation has demonstrated that the proposed algorithm improves load balance and prolongs the network lifetime. 13. A QoS Aware Service Composition Protocol in Mobile Ad Hoc Networks HAN Song-qiao; ZHANG Shen-sheng; ZHANG Yong; CAO Jian 2008-01-01 A novel decentralized service composition protocol based on quality of service (QoS) is presented for mobile ad hoc networks (MANETs). A service composition in MANETs is considered as a service path discovery in a service network. Based on the concept of source routing, the protocol integrates route discovery, service discovery and service composition, and utilizes a constrained flooding approach to discover the optimal service path. A service path maintenance mechanism is exploited to recover broken service paths. Simulation experiments demonstrate that the proposed protocol outperforms existing service composition protocols. 14. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks Huibin, Liu; Jun, Zhang 2016-04-01 Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing difficulties become more and more pronounced due to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It carefully designs the optimal-route calculation used by the protocol and the transmission mechanism for communication packets. In the calculation of optimal routes by the cuckoo search algorithm, adding a QoS constraint allows the optimal route found to conform to specified bandwidth and delay requirements, and a certain balance can be obtained among computational cost, bandwidth and delay. The NS2 simulation software is used to test the performance of the protocol in three scenarios and to validate the feasibility and validity of the CSAODV protocol. In the results, the CSAODV routing protocol adapts better to changes of the network topology than the AODV protocol; it effectively improves the packet delivery fraction, reduces the transmission delay of the network, reduces the extra burden placed on the network by control information, and improves the routing efficiency of the network. 15. A Secure 3-Way Routing Protocols for Intermittently Connected Mobile Ad Hoc Networks Ramesh Sekaran 2014-01-01 Full Text Available A mobile ad hoc network may be partially connected or it may be disconnected in nature, and these forms of networks are termed intermittently connected mobile ad hoc networks (ICMANETs). Routing in such a disconnected network is commonly an arduous task. Many routing protocols have been proposed for routing in ICMANETs over the decades. The routing techniques in existence for ICMANETs are, namely, flooding, epidemic, probabilistic, copy case, spray and wait, and so forth.
These techniques achieve effective routing with minimum latency, higher delivery ratio, lower overhead, and so forth. Although these techniques generate effective results, in this paper we propose novel routing algorithms grounded on agent and cryptographic techniques, namely location dissemination service (LoDiS) routing with agent AES, A-LoDiS with agent AES routing, and B-LoDiS with agent AES routing, ensuring optimal results with respect to various network routing parameters. Along with efficient routing, the algorithm ensures a higher degree of security. The security level is assessed by testing against the possibility of malicious nodes in the network. This paper also provides comparative results of the proposed algorithms for secure routing in ICMANETs. 17. Mitigating Malicious Attacks Using Trust Based Secure-BEFORE Routing Strategy in Mobile Ad Hoc Networks Rutuja Shah 2016-09-01 Full Text Available Mobile ad hoc networks (MANETs), being infrastructure-less and dynamic in nature, are predominantly susceptible to attacks such as black hole, wormhole, and cunning gray hole attacks at the source or destination. Various solutions have been put forth in the literature to mitigate the effects of these attacks on network performance and to improve the reliability of the network. However, these attacks remain a prominent and serious threat in MANETs. Hence, a trust-based routing strategy termed Secure-BEFORE routing (Best FOrwarding Route Estimation) is proposed to ensure optimal route estimation, computing the trust value and hop counts using dummy packets inside the network at the one-hop level. It is observed that the overall performance of the network is improved by providing one-hop-level security while maintaining the packet equivalence ratio. Malicious and suspicious nodes are isolated and eliminated from the network based on their behavior. 18. Key Management and Authentication in Ad Hoc Network based on Mobile Agent Yi Zhang 2009-08-01 Full Text Available Key management and authentication are important to the security of Mobile Ad Hoc Networks (MANETs).
Based on (t, n) threshold cryptography, this paper introduces mobile agents to exchange private keys and network topology information with the nodes in the network. This method avoids a centralized certification authority for distributing the public keys and certificates, and thus enhances security. Carrying a private key and some state variables, mobile agents navigate the network according to a visits-balance policy; namely, the node with the fewest visits is visited first by a mobile agent. Any t nodes in the network can cooperate to authenticate a new node wanting to join the network. Experimental results show that the mobile agent performs very well in improving the success ratio of authentication and enhancing security while reducing communication overhead and resource consumption. 19. A Cluster Maintenance Algorithm Based on Relative Mobility for Mobile Ad Hoc Network Management SHEN Zhong; CHANG Yilin; ZHANG Xin 2005-01-01 The dynamic topology of mobile ad hoc networks makes network management significantly more challenging than in wireline networks. The traditional Client/Server (Manager/Agent) management paradigm does not work well in such a dynamic environment, while a hierarchical network management architecture based on clustering is more feasible. Although the movement of nodes makes the cluster structure changeable and introduces new challenges for network management, mobility is a relative concept: a node with high relative mobility is more prone to unstable behavior than a node with less relative mobility, so the relative mobility of a node can be used to predict its future behavior. This paper presents cluster availability, which provides a quantitative measurement of cluster stability. Furthermore, a cluster maintenance algorithm based on cluster availability is proposed. The simulation results show that, compared to the Minimum ID clustering algorithm, our algorithm successfully alleviates the influence of node mobility and makes network management more efficient. 20. Integration of Body Sensor Networks and Vehicular Ad-hoc Networks for Traffic Safety.
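As flagged in item 6 of the first listing above, LET-style metrics need node positions, speeds, and headings. The sketch below is a hedged illustration in R (the language used in the package vignette later in this document) of the classical link-expiration-time formula of Su and Gerla, not the paper's exact coverage-area-weighted metric; the function name and the example numbers are invented.

```r
# Classical link expiration time (LET) between nodes i and j with positions
# (xi, yi) and (xj, yj), speeds vi and vj (m/s), headings thi and thj
# (radians), and radio range r (m).
let_estimate <- function(xi, yi, xj, yj, vi, vj, thi, thj, r) {
  a <- vi * cos(thi) - vj * cos(thj)
  b <- xi - xj
  c <- vi * sin(thi) - vj * sin(thj)
  d <- yi - yj
  if (a == 0 && c == 0) return(Inf)  # identical velocities: link never expires
  disc <- (a^2 + c^2) * r^2 - (a * d - b * c)^2
  if (disc < 0) return(0)            # the nodes are not within range
  (-(a * b + c * d) + sqrt(disc)) / (a^2 + c^2)
}

# Example: nodes 50 m apart, range 250 m, closing head-on at 10 and 15 m/s.
# The relative speed is 25 m/s, so the link should last (50 + 250) / 25 = 12 s.
let_estimate(0, 0, 50, 0, 10, 15, 0, pi, 250)
#> [1] 12
```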
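Item 10 of the first listing (RDGR) builds on greedy geographic forwarding. The following is a hedged sketch of that base strategy only; RDGR itself adds reliability scores derived from speed and direction, which are not reproduced here, and all names and coordinates are invented for illustration.

```r
# Plain greedy geographic forwarding: pick the neighbor closest to the
# destination; return NULL at a local maximum (no neighbor is closer than
# the current node), where real protocols switch to a recovery mode.
greedy_next_hop <- function(current, neighbor_pos, dest) {
  dist2 <- function(p, q) sqrt(sum((p - q)^2))
  d_cur <- dist2(current, dest)
  d_nb  <- vapply(neighbor_pos, dist2, numeric(1), q = dest)
  best  <- which.min(d_nb)
  if (d_nb[best] < d_cur) neighbor_pos[[best]] else NULL
}

# Example: of the two neighbors, the one at (120, 40) makes the most progress.
greedy_next_hop(current = c(100, 50),
                neighbor_pos = list(c(120, 40), c(80, 90)),
                dest = c(300, 60))
#> [1] 120  40
```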
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4575211703777313, "perplexity": 1687.8013524308092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646189.21/warc/CC-MAIN-20180319003616-20180319023616-00251.warc.gz"}
https://mathoverflow.net/questions/266365/git-quotients-for-linear-representations-of-sl2-mathbb-c
# GIT quotients for linear representations of $SL(2,\mathbb C)$

Let $V$ be the standard two-dimensional representation of $SL(2,\mathbb C)$ and let ${\rm Sym}^2V$ be its symmetric square. Let $n$ be a positive integer and consider the following two representations 1) $V^{\oplus n}$ and 2) $({\rm Sym}^2V)^{\oplus n}$ of $SL(2,\mathbb C)$.

Question 1) Is there some explicit description of the GIT quotient $V^{\oplus n}//SL(2,\mathbb C)$? In particular, is it true that the ring of invariant polynomials is generated by the $\frac{n(n-1)}{2}$ quadratic polynomials $vol(v_i,v_j)$, $i\ne j$, where $v=(v_1,\cdots,v_n)\in V^{\oplus n}$ and $vol$ is a volume form preserved by $SL(2,\mathbb C)$?

Question 2) Is there some explicit description of the GIT quotient $({\rm Sym}^2V)^{\oplus n}//SL(2,\mathbb C)$? In particular, is it true that the ring of invariant polynomials is generated by the $\frac{n(n+1)}{2}$ quadratic polynomials $(v_i,v_j)$, where $v=(v_1,\cdots,v_n)\in ({\rm Sym}^2V)^{\oplus n}$ and $(.,.)$ is a symmetric bilinear form on ${\rm Sym}^2V$ preserved by $SL(2,\mathbb C)$?

• Mumford's description of the geometric quotient of $V^{\oplus n}$ by the standard action of $\text{SL}(V)$ in Chapter 3 of GIT is so very explicit that it does not actually use any of the general theory of "Geometric Invariant Theory"! That is important, because Mumford wanted to construct $\mathcal{M}_g$ as a scheme over $\text{Spec}(\mathbb{Z})$, whereas GIT (at that time) only worked over a fixed field. So Mumford uses his explicit quotient construction of $V^{\oplus n}$ over any base as a step in constructing $\mathcal{M}_g$ as a quasi-projective scheme over $\text{Spec}(\mathbb{Z})$. – Jason Starr Apr 4 '17 at 15:06

## 3 Answers

Both questions are extensively dealt with in Weyl's book "Classical invariant theory", which investigates the invariants of the classical groups on multiple copies of their defining representations. Determining a set of generators is called a "First Fundamental Theorem" (FFT), while the relations are given in a "Second Fundamental Theorem".

Question 1: This is best regarded as the action of $Sp(2n)$ on $V=\mathbb C^{2n}$ for $n=1$. Then, indeed, the ring of invariants is generated by all pairings $f_{ij}:=\omega(v_i,v_j)$. These are neatly organized in a $2n\times 2n$ skew-symmetric matrix. The relations are generated by all "principal" Pfaffian minors of size $(2n+2)\times(2n+2)$. For $n=2$ these are quadratic polynomials in 3 variables called the "Plücker relations". The quotient consists of all skew-symmetric matrices of rank $\le 2$ (which is, of course, the affine cone over a Grassmannian).

Question 2: In this case one is dealing with the group $SO(n)$ acting on $\mathbb C^n$ for $n=3$. Here things are a bit more complicated since $SO(n)$ is not strictly a classical group. The better problem is to look at the group $O(n)$ instead. In this case, the invariants are indeed generated by all pairings $p_{ij}=(v_i,v_j)$, which can be organized into an $n\times n$ symmetric matrix. The relations are generated by all $(n+1)\times(n+1)$ principal minors. In your case, the quotient would be the set of symmetric matrices of rank $\le 3$. Since you are dealing with $SO(n)$ instead of $O(n)$, things are more complicated. In this case there are additional generating invariants, namely all determinants of the form $\det(v_{i_1},\ldots,v_{i_n})$.
For $n=3$ the quotient is the subset of $S^2\mathbb C^n\oplus\wedge^3\mathbb C^n$ given by two sets of relations: the rank conditions and the condition that the square of the determinant can be expressed as a Gram determinant $\det((v_{i_\mu},v_{i_\nu}))$.

• Thank you! I imagine this is contained in your answer, but is it possible to spell out a bit more explicitly what the GIT quotient is a cone over, in the second case? (if it is again a cone over something) – aglearner Apr 4 '17 at 20:20

• It is not a space one encounters in a Linear Algebra course. Varieties like that have been studied. Search for "determinantal varieties". It carries a $GL(n,\mathbb C)$-action which makes it into a spherical variety with probably 3 orbits. It is singular with rational singularities (in particular normal and Cohen-Macaulay). – Friedrich Knop Apr 5 '17 at 5:41

• Friedrich, many thanks! I had not realized that this is a spherical variety. I am not worried too much that this variety is not from a linear algebra course :). But I am not able to understand the last three lines of your answer. What are "the rank conditions" and the condition that "the square of the determinant can be expressed as a Gram determinant"? – aglearner Apr 5 '17 at 9:06

• In fact, I understand that there is a morphism from $\rm Sym^2(V)^{\oplus n}//SL(2,\mathbb C)$ to the cone over the Grassmannian $G(3,n)$. For a generic point the fiber is $6$-dimensional. But I don't see how to get an action with one open orbit on $\rm Sym^2(V)^{\oplus n}//SL(2,\mathbb C)$... So I don't understand what the spherical variety is in this case... – aglearner Apr 5 '17 at 11:38

For question 1, the answer is most easily described by thinking of $V^{\oplus n}$ as $Hom(\mathbb C^2, \mathbb C^n)$. From this it follows that the projective GIT quotient $$V^{\oplus n} //_{det} GL_2$$ is isomorphic to the Grassmannian $\mathbb G(2,n)$. From this, it follows that the invariant ring $\mathbb C[V^{\oplus n}]^{SL_2}$ is the homogeneous coordinate ring of $\mathbb G(2,n)$. Your quadratic polynomials are the generators of this coordinate ring, and the relations are given by the Plücker relations (written out in coordinates after this thread).

• I suppose the FFT is hidden in the isomorphism with the Grassmannian. – Abdelmalek Abdesselam Apr 4 '17 at 17:58

The answers are: Question 1: yes. Question 2: no.

Explicit linear generators follow from the first fundamental theorem (FFT) for $SL_k$. You can see my two answers to this MO question for an explicit proof of the FFT for $SL_k$. It is best to use a graphical language to represent these generators, as in my article "On the volume conjecture for classical spin networks", J. Knot Theory Ramifications 21 (2012), no. 3, 1250022. This type of graphical representation is very old, as you can see in this MO answer. Then the fun begins, namely trying to find polynomial rather than linear generators. Essentially this results from the Grassmann-Plücker relation. For $SL_2$ and forms of degree 1 or 2 this is easy to do by hand. For quadratics, the GP relation can be used to break cycles (the only thing produced by the FFT). In fact, the article by Kempe in the second MO answer I mentioned does exactly that, with explanatory pictures. The first invariant which is not expressible by the ones you gave is for three quadratics, corresponding to a 3-cycle containing each one of them. This is also the Wronskian of the three quadratics.
For quadratics, an explicit system of generators, which basically adds these Wronskians for each triple of forms, is given in Section 256, "The quadratic types", of the book by Grace and Young. The basic identity from that book needed for breaking cycles of length at least four is, in classical symbolic notation: $$2(ab)(bc)(cd)(de)=(bc)(cd)(db)(ae)-(cd)^2(ab)(be)-(bc)^2(ad)(de)+(bd)^2(ac)(ae)$$ where $a=(a_1,a_2)$ etc. and $(ab)$ is the determinant of the matrix with first row $a$ and second row $b$, etc. Using the self-duality of $SL_2$ representations, the LHS can be interpreted as a product of four $2\times 2$ matrices. This is basically the Amitsur-Levitzki Theorem for $2\times 2$ matrices. I don't know how explicit you want to be, but you can look at the article "Defining Relations for the Algebra of Invariants of 2×2 Matrices" by Drensky for more details. He treats the case of generic matrices, while you are interested in matrices coming from symmetric bilinear forms by self-dualization. For generic matrices, traces of words of length 1, 2, 3 form a minimal system of $n+\frac{n(n-1)}{2}+\frac{n(n-1)(n-2)}{6}$ algebra generators. Drensky finds the polynomial relations between these generators. Here, with quadratic binary forms, the words of length 1 disappear.

• Thank you! I guess the article of Drensky indeed addresses the second half of the question. I would be happy if there was some "geometric" description of this GIT quotient akin to the cone over the Grassmannian $G(2,n)$ appearing in part 1) of the question. (But it might be that such a description does not exist...) – aglearner Apr 5 '17 at 3:35
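To make the Question 1 answers above concrete, here is the standard statement written out in coordinates; this is textbook material rather than anything new from the thread, with the normalization of the generators left implicit. The invariant ring is generated by the brackets
$$[ij] := \det(v_i \; v_j) = vol(v_i, v_j), \qquad 1 \le i < j \le n,$$
and the ideal of relations among them is generated by the three-term Plücker relations
$$[ij][kl] - [ik][jl] + [il][jk] = 0, \qquad 1 \le i < j < k < l \le n,$$
so that $V^{\oplus n}//SL(2,\mathbb C)$ is the affine cone over the Grassmannian $\mathbb G(2,n)$ in its Plücker embedding.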
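For Question 2, a worked dictionary in coordinates may likewise help; again this is standard classical material, stated with normalizations that may differ by scalar factors from other sources. Writing $q_i = a_i x^2 + 2b_i xy + c_i y^2 \in {\rm Sym}^2 V$, the quadratic pairings and the extra determinantal generator discussed in the answers are
$$(q_i, q_j) = a_i c_j + a_j c_i - 2 b_i b_j, \qquad D_{ijk} = \det\begin{pmatrix} a_i & b_i & c_i \\ a_j & b_j & c_j \\ a_k & b_k & c_k \end{pmatrix},$$
with $(q_i, q_i) = 2(a_i c_i - b_i^2)$ recovering (twice) the discriminant, $D_{ijk}$ a scalar multiple of the Wronskian of the three quadratics mentioned above, and $D_{ijk}^2$ expressible, up to a scalar, as the Gram determinant $\det\big((q_{i_\mu}, q_{i_\nu})\big)_{\mu,\nu=1}^{3}$.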
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052690267562866, "perplexity": 156.80137491687472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604397.40/warc/CC-MAIN-20200121132900-20200121161900-00148.warc.gz"}
http://cran.fhcrc.org/web/packages/influential/vignettes/Vignettes.html
# 1 Overview

influential is an R package mainly for the identification of the most influential nodes in a network as well as the classification and ranking of top candidate features. The influential package contains several functions that can be categorized into five groups according to their purpose:

• Network reconstruction
• Calculation of centrality measures
• Assessment of the association of centrality measures
• Identification of the most influential network nodes
• Experimental data-based classification and ranking of features

The sections below introduce these five categories. However, if you do not wish to go through all of the functions and their applications, you may skip to any of the novel methods proposed by the influential package.

library(influential)

# 2 Network reconstruction

Three functions have been obtained from the igraph1 R package for the reconstruction of networks.

## 2.1 From a data frame

In the data frame, the first and second columns should be composed of source and target nodes. A sample appropriate data frame is shown below (only the column headers, lncRNA and Coexpressed.Gene, survive here); it is a co-expression dataset obtained from a paper by Salavaty et al.2

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(d = MyData)

If you look at the class of My_graph, you should see that it has an igraph class:

class(My_graph)
#> [1] "igraph"

## 2.2 From an adjacency matrix

A sample appropriate adjacency matrix is shown below:

          LINC00891 LINC00968 LINC00987 LINC01506 MAFG-AS1 MIR497HG
LINC00891         0         1         1         0        0        0
LINC00968         0         0         1         0        0        0
LINC00987         0         1         0         0        0        0
LINC01506         0         0         0         0        0        0
MAFG-AS1          0         0         0         0        0        0
MIR497HG          0         1         1         0        0        0

• Note that the matrix has the same number of rows and columns.

# Preparing the data
# Reconstructing the graph
My_graph <- graph_from_adjacency_matrix(MyData)

## 2.3 From an incidence matrix

A sample appropriate incidence matrix is shown below:

       Gene_1 Gene_2 Gene_3 Gene_4 Gene_5
cell_1      0      1      1      0      1
cell_2      1      1      1      0      0
cell_3      1      1      1      0      0
cell_4      0      0      0      1      0

# Reconstructing the graph (incidence matrices require graph_from_incidence_matrix)
My_graph <- graph_from_incidence_matrix(MyData)

## 2.4 From a SIF file

SIF is the common output format of the Cytoscape software.

# Reconstructing the graph
My_graph <- sif2igraph(Path = "Sample_SIF.sif")

class(My_graph)
#> [1] "igraph"

# 3 Calculation of centrality measures

To calculate the centrality of nodes within a network, several different options are available. The following sections describe how to obtain the names of network nodes and use different functions to calculate the centrality of nodes within a network. Although several centrality functions are provided, we recommend the IVI for the identification of the most influential nodes within a network. The results of all of the following centrality functions can also be conveniently illustrated using the centrality-based network visualization function.

## 3.1 Network vertices

Network vertices (nodes) are required in order to calculate their centrality measures. Thus, before the calculation of network centrality measures, we need to obtain the names of the required network vertices. To this end, we use the V function, which is obtained from the igraph package. However, you may provide a character vector of the names of your desired nodes manually.

• Note that in many of the centrality index functions, the entire set of network nodes is assessed if no vector of desired vertices is provided.
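Before applying this to the co-expression dataset, a minimal self-contained illustration may help (a toy edge list, not the vignette's data):

library(igraph)

# A toy undirected graph from a two-column edge data frame (illustrative only)
toy_edges <- data.frame(from = c("A", "A", "B"),
                        to   = c("B", "C", "C"))
toy_graph <- graph_from_data_frame(d = toy_edges, directed = FALSE)

V(toy_graph)  # returns the three named vertices A, B, and C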
# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
My_graph_vertices <- V(My_graph)
#> + 6/794 vertices, named, from 775cff6:
#> [1] ADAMTS9-AS2 C8orf34-AS1 CADM3-AS1 FAM83A-AS1 FENDRR LANCL1-AS1

## 3.2 Degree centrality

Degree centrality is the most commonly used local centrality measure, which can be calculated via the degree function obtained from the igraph package.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating degree centrality
My_graph_degree <- degree(My_graph, v = GraphVertices, normalized = FALSE)
#> 172 121 168 26 189 176

Degree centrality can also be calculated for directed graphs via specifying the mode parameter.

## 3.3 Betweenness centrality

Betweenness centrality, like degree centrality, is one of the most commonly used centrality measures, but it is representative of the global centrality of a node. This centrality metric can also be calculated using a function obtained from the igraph package.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating betweenness centrality
My_graph_betweenness <- betweenness(My_graph, v = GraphVertices,
                                    directed = FALSE, normalized = FALSE)
#> 21719.857 28185.199 26946.625 2940.467 33333.369 21830.511

Betweenness centrality can also be calculated for directed and/or weighted graphs via specifying the directed and weights parameters, respectively.

## 3.4 Neighborhood connectivity

Neighborhood connectivity is another important centrality measure that reflects the semi-local centrality of a node. This centrality measure was first described in a Science paper3 in 2002 and is calculable in the R environment for the first time via the influential package.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating neighborhood connectivity
neighrhood.co <- neighborhood.connectivity(graph = My_graph,
                                           vertices = GraphVertices,
                                           mode = "all")
#> 11.290698 4.983471 7.970238 3.000000 15.153439 13.465909

Neighborhood connectivity can also be calculated for directed graphs via specifying the mode parameter.

## 3.5 H-index

H-index is another semi-local centrality measure that was inspired by its application in assessing the impact of researchers and is calculable in the R environment for the first time via the influential package.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating H-index
h.index <- h_index(graph = My_graph, vertices = GraphVertices, mode = "all")
#> 11 9 11 2 12 12

H-index can also be calculated for directed graphs via specifying the mode parameter.

## 3.6 Local H-index

Local H-index (LH-index) is a semi-local centrality measure and an improved version of H-index centrality that extends the H-index to the second-order neighbors of a node and is calculable in the R environment for the first time via the influential package.
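Before moving on to LH-index, the plain H-index definition can be made concrete with a minimal hand-rolled sketch (an illustration of the definition only, not the package's h_index implementation): the H-index of a node is the largest h such that the node has at least h neighbors, each of degree at least h.

# Illustrative sketch only
h_index_manual <- function(g, v) {
  nb <- igraph::neighbors(g, v, mode = "all")
  d  <- sort(igraph::degree(g)[nb], decreasing = TRUE)
  sum(d >= seq_along(d))  # largest h with at least h neighbors of degree >= h
}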
# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating Local H-index
lh.index <- lh_index(graph = My_graph, vertices = GraphVertices, mode = "all")
#> 1165 446 994 34 1289 1265

Local H-index can also be calculated for directed graphs via specifying the mode parameter.

## 3.7 Collective Influence

Collective Influence (CI) is a global centrality measure that calculates the product of the reduced degree (degree - 1) of a node and the total reduced degree of all nodes at a distance d from the node. This centrality measure is provided in an R package for the first time.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating Collective Influence
ci <- collective.influence(graph = My_graph, vertices = GraphVertices,
                           mode = "all", d = 3)
#> 9918 70560 39078 675 10716 7350

Collective Influence can also be calculated for directed graphs via specifying the mode parameter.

## 3.8 ClusterRank

ClusterRank is a local centrality measure that makes a connection between local and semi-local characteristics of a node and at the same time removes the negative effects of local clustering.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculating ClusterRank
cr <- clusterRank(graph = My_graph, vids = GraphVertices,
                  directed = FALSE, loops = TRUE)
#> 63.459812 5.185675 21.111776 1.280000 135.098278 81.255195

ClusterRank can also be calculated for directed graphs via specifying the directed parameter.

# 4 Assessment of the association of centrality measures

## 4.1 Conditional probability of deviation from means

The function cond.prob.analysis assesses the conditional probability of deviation of two centrality measures (or any other two continuous variables) from their corresponding means in opposite directions.

# Preparing the data
MyData <- centrality.measures

# Assessing the conditional probability
My.conditional.prob <- cond.prob.analysis(data = MyData,
                                          nodes.colname = rownames(MyData),
                                          Desired.colname = "BC",
                                          Condition.colname = "NC")
print(My.conditional.prob)
#> $ConditionalProbability
#> [1] 51.61871
#>
#> $ConditionalProbability_split.half.sample
#> [1] 51.73611

• As you can see in the results, the whole dataset is also randomly split in half in order to further test the validity of the conditional probability assessments.
• The higher the conditional probability, the more the two centrality measures behave in contrary manners.
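The idea behind this measure can be sketched on synthetic data as follows (a conceptual illustration only; cond.prob.analysis is the canonical implementation and its exact procedure may differ):

set.seed(1)
x <- rnorm(200)                 # the condition variable (e.g. NC)
y <- -x + rnorm(200, sd = 0.5)  # the desired variable (e.g. BC)

# Percentage of observations where y falls below its mean,
# given that x lies above its mean
100 * sum(y < mean(y) & x > mean(x)) / sum(x > mean(x))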
## 4.2 Nature of association (considering dependent and independent)

The function double.cent.assess can be used to automatically assess both the distribution mode of centrality measures (two continuous variables) and the nature of their association. The analyses done through this function are as follows:

1. Normality assessment:
• Variables with fewer than 5000 observations: Shapiro-Wilk test
• Variables with over 5000 observations: Anderson-Darling test

2. Assessment of non-linear/non-monotonic correlation:
• Non-linearity assessment: fitting a generalized additive model (GAM) with integrated smoothness approximations using the mgcv package.
• Non-monotonicity assessment: comparing the squared coefficient of Spearman's rank correlation with the R-squared of a ranked regression test with non-linear splines.
• Squared coefficient of Spearman's rank correlation > R-squared of ranked regression with non-linear splines: Monotonic
• Squared coefficient of Spearman's rank correlation < R-squared of ranked regression with non-linear splines: Non-monotonic

3. Dependence assessment:
• Hoeffding's independence test: Hoeffding's test of independence is based on the population measure of deviation from independence and computes a D statistic ranging from -0.5 to 1; greater D values indicate a higher dependence between variables.
• Descriptive non-linear non-parametric dependence test: this assessment is based on non-linear non-parametric statistics (NNS) and outputs a dependence value ranging from 0 to 1; greater values indicate a higher dependence between variables. For further details please refer to the NNS R package4.

4. Correlation assessment: as the correlation between most of the centrality measures follows a non-monotonic form, this part of the assessment is done based on the NNS statistics, which itself calculates the correlation based on partial moments and outputs a correlation value ranging from -1 to 1. For further details please refer to the NNS R package.

5. Assessment of conditional probability of deviation from means: this step assesses the conditional probability of deviation of two centrality measures (or any other two continuous variables) from their corresponding means in opposite directions.
• The independent centrality measure (variable) is considered as the condition variable and the other as the desired one.
• As you will see in the results, the whole dataset is also randomly split in half in order to further test the validity of the conditional probability assessments.
• The higher the conditional probability, the more the two centrality measures behave in contrary manners.

# Preparing the data
MyData <- centrality.measures

# Association assessment
My.metrics.assessment <- double.cent.assess(data = MyData,
                                            nodes.colname = rownames(MyData),
                                            dependent.colname = "BC",
                                            independent.colname = "NC")
print(My.metrics.assessment)
#> $Summary_statistics
#>                  BC       NC
#> Min.    0.000000000   1.2000
#> 1st Qu. 0.000000000  66.0000
#> Median  0.000000000 156.0000
#> Mean    0.005813357 132.3443
#> 3rd Qu. 0.000340000 179.3214
#> Max.    0.529464720 192.0000
#>
#> $Normality_results
#>         p.value
#> BC 1.415450e-50
#> NC 9.411737e-30
#>
#> $Dependent_Normality
#> [1] "Non-normally distributed"
#>
#> $Independent_Normality
#> [1] "Non-normally distributed"
#>
#> $GAM_nonlinear.nonmonotonic.results
#>      edf  p-value
#> 8.992406 0.000000
#>
#> $Association_type
#> [1] "nonlinear-nonmonotonic"
#>
#> $HoeffdingD_Statistic
#>         D_statistic P_value
#> Results  0.01770279   1e-08
#>
#> $Dependence_Significance
#>                       Hoeffding
#> Results Significantly dependent
#>
#> $NNS_dep_results
#>         Correlation Dependence
#> Results  -0.7948106  0.8647164
#>
#> $ConditionalProbability
#> [1] 55.35386
#>
#> $ConditionalProbability_split.half.sample
#> [1] 55.90331

Note: as a single regression line does not fit all models with a certain degree of freedom, this function might return an error, depending on the size and correlation mode of the variables provided, due to the incapability of running step 2. In this case, you may follow each step manually or, as an alternative, run the other function named double.cent.assess.noRegression, which does not perform any regression test; consequently, it is not required to determine the dependent and independent variables.

## 4.3 Nature of association (without considering dependence direction)

The function double.cent.assess.noRegression can be used to automatically assess both the distribution mode of centrality measures (two continuous variables) and the nature of their association. The analyses done through this function are as follows:

1. Normality assessment:
• Variables with fewer than 5000 observations: Shapiro-Wilk test
• Variables with over 5000 observations: Anderson-Darling test

2. Dependence assessment:
• Hoeffding's independence test: Hoeffding's test of independence is based on the population measure of deviation from independence and computes a D statistic ranging from -0.5 to 1; greater D values indicate a higher dependence between variables.
• Descriptive non-linear non-parametric dependence test: this assessment is based on non-linear non-parametric statistics (NNS) and outputs a dependence value ranging from 0 to 1; greater values indicate a higher dependence between variables. For further details please refer to the NNS R package.

3. Correlation assessment: as the correlation between most of the centrality measures follows a non-monotonic form, this part of the assessment is done based on the NNS statistics, which itself calculates the correlation based on partial moments and outputs a correlation value ranging from -1 to 1. For further details please refer to the NNS R package.

4. Assessment of conditional probability of deviation from means: this step assesses the conditional probability of deviation of two centrality measures (or any other two continuous variables) from their corresponding means in opposite directions.
• The centrality2 variable is considered as the condition variable and the other (centrality1) as the desired one.
• As you will see in the results, the whole dataset is also randomly split in half in order to further test the validity of the conditional probability assessments.
• The higher the conditional probability, the more the two centrality measures behave in contrary manners.
# Preparing the data
MyData <- centrality.measures

# Association assessment
My.metrics.assessment <- double.cent.assess.noRegression(data = MyData,
                                                         nodes.colname = rownames(MyData),
                                                         centrality1.colname = "BC",
                                                         centrality2.colname = "NC")
print(My.metrics.assessment)
#> $Summary_statistics
#>                  BC       NC
#> Min.    0.000000000   1.2000
#> 1st Qu. 0.000000000  66.0000
#> Median  0.000000000 156.0000
#> Mean    0.005813357 132.3443
#> 3rd Qu. 0.000340000 179.3214
#> Max.    0.529464720 192.0000
#>
#> $Normality_results
#>         p.value
#> BC 1.415450e-50
#> NC 9.411737e-30
#>
#> $Centrality1_Normality
#> [1] "Non-normally distributed"
#>
#> $Centrality2_Normality
#> [1] "Non-normally distributed"
#>
#> $HoeffdingD_Statistic
#>         D_statistic P_value
#> Results  0.01770279   1e-08
#>
#> $Dependence_Significance
#>                       Hoeffding
#> Results Significantly dependent
#>
#> $NNS_dep_results
#>         Correlation Dependence
#> Results  -0.7948106  0.8647164
#>
#> $ConditionalProbability
#> [1] 55.35386
#>
#> $ConditionalProbability_split.half.sample
#> [1] 55.68163

# 5 Identification of the most influential network nodes

IVI: IVI is the first integrative method for the identification of the most influential nodes of a network in a way that captures all network topological dimensions. The IVI formula integrates the most important local (i.e. degree centrality and ClusterRank), semi-local (i.e. neighborhood connectivity and local H-index) and global (i.e. betweenness centrality and collective influence) centrality measures in such a way that it both synergizes their effects and removes their biases.

## 5.1 Integrated Value of Influence (IVI) from centrality measures

# Preparing the data
MyData <- centrality.measures

# Calculation of IVI
My.vertices.IVI <- ivi.from.indices(DC = MyData$DC,
                                    CR = MyData$CR,
                                    NC = MyData$NC,
                                    LH_index = MyData$LH_index,
                                    BC = MyData$BC,
                                    CI = MyData$CI)
#> [1] 24.670056 8.344337 18.621049 1.017768 29.437028 33.512598

## 5.2 Integrated Value of Influence (IVI) from a graph

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculation of IVI
My.vertices.IVI <- ivi(graph = My_graph, vertices = GraphVertices,
                       weights = NULL, directed = FALSE, mode = "all",
                       loops = TRUE, d = 3, scaled = TRUE)
#> 39.53878 19.94999 38.20524 1.12371 100.00000 47.49356

IVI can also be calculated for directed and/or weighted graphs via specifying the directed, mode, and weights parameters. Check out our paper5 for a more complete description of the IVI formula and all of its underpinning methods and analyses. The following tutorial video demonstrates how to simply calculate the IVI value of all of the nodes within a network.
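Since ivi returns a numeric vector (one value per vertex), the top-ranked nodes can be listed with base R; for instance:

# The five most influential nodes according to IVI
head(sort(My.vertices.IVI, decreasing = TRUE), 5)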
## 5.3 Network visualization

The cent_network.vis is a function for the visualization of a network based on applying a centrality measure to the size and color of network nodes. The centrality of network nodes can be calculated by any means and based on any centrality index. Here, we demonstrate the visualization of a network according to IVI values.

# Reconstructing the graph
set.seed(70)
My_graph <- igraph::sample_gnm(n = 50, m = 120, directed = TRUE)

# Calculating the IVI values
My_graph_IVI <- ivi(My_graph, directed = TRUE)

# Visualizing the graph based on IVI values
My_graph_IVI_Vis <- cent_network.vis(graph = My_graph,
                                     cent.metric = My_graph_IVI,
                                     directed = TRUE,
                                     plot.title = "IVI-based Network",
                                     legend.title = "IVI value")
My_graph_IVI_Vis

The above figure illustrates a simple use case of the function cent_network.vis. You can apply this function to directed/undirected and/or weighted/unweighted networks. Also, complete flexibility (a list of arguments) has been provided for the adjustment of colors, transparencies, sizes, titles, etc. Additionally, several different layouts have been provided that can be conveniently applied to a network. In the case of highly crowded networks, the "grid" layout would be most appropriate. The following tutorial video demonstrates how to visualize a network based on the centrality of nodes (e.g. their IVI values).

## 5.4 IVI shiny app

A shiny app has also been developed for the calculation of IVI as well as IVI-based network visualization, which is accessible using the following command.

influential::runExample("IVI")

You can also access the shiny app online at the Influential Software Package server.

# 6 Identification of the most important network spreaders

Sometimes we seek to identify not necessarily the most influential nodes but the nodes with the most potential in the spreading of information throughout the network.

Spreading score: spreading.score is an integrative score made up of four different centrality measures, including ClusterRank, neighborhood connectivity, betweenness centrality, and collective influence. The Spreading score reflects the spreading potential of each node within a network and is one of the major components of the IVI.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculation of Spreading score
Spreading.score <- spreading.score(graph = My_graph,
                                   vertices = GraphVertices,
                                   weights = NULL, directed = FALSE,
                                   mode = "all", loops = TRUE,
                                   d = 3, scaled = TRUE)
#> 42.932497 38.094111 45.114648 1.587262 100.000000 49.193292

Spreading score can also be calculated for directed and/or weighted graphs via specifying the directed, mode, and weights parameters. The results can be conveniently illustrated using the centrality-based network visualization function.

# 7 Identification of the most important network hubs

In some cases we want to identify not necessarily the most influential nodes but the nodes with the most sovereignty in their surrounding local environments.

Hubness score: hubness.score is an integrative score made up of two different centrality measures, including degree centrality and local H-index. The Hubness score reflects the power of each node in its surrounding environment and is one of the major components of the IVI.

# Preparing the data
MyData <- coexpression.data

# Reconstructing the graph
My_graph <- graph_from_data_frame(MyData)

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculation of Hubness score
Hubness.score <- hubness.score(graph = My_graph,
                               vertices = GraphVertices,
                               directed = FALSE, mode = "all",
                               loops = TRUE, scaled = TRUE)
#> 84.299719 46.741660 77.441514 8.437142 92.870451 88.734131

Hubness score can also be calculated for directed graphs via specifying the directed and mode parameters. The results can be conveniently illustrated using the centrality-based network visualization function.
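For instance, following the same call pattern as the IVI-based visualization in Section 5.3 (a hedged sketch; the argument names mirror that example, and the directed argument is left at its default for this undirected graph):

# Visualizing the co-expression network based on the Hubness scores computed above
Hubness_Vis <- cent_network.vis(graph = My_graph,
                                cent.metric = Hubness.score,
                                plot.title = "Hubness-based Network",
                                legend.title = "Hubness score")
Hubness_Vis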
# 8 Ranking the influence of nodes on the topology of a network based on the SIRIR model

SIRIR: SIRIR is achieved by the integration of the susceptible-infected-recovered (SIR) model with the leave-one-out cross-validation technique and ranks network nodes based on their true universal influence on the network topology and spread of information. One of the applications of this function is the assessment of the performance of a novel algorithm in the identification of network influential nodes.

# Reconstructing the graph
My_graph <- sif2igraph(Path = "Sample_SIF.sif")

# Extracting the vertices
GraphVertices <- V(My_graph)

# Calculation of influence rank
Influence.Ranks <- sirir(graph = My_graph, vertices = GraphVertices,
                         beta = 0.5, gamma = 1, no.sim = 10, seed = 1234)

       difference.value rank
MRAP               49.7    1
FOXM1              49.5    2
POSTN              49.4    4
CDC7               49.3    5
ZWINT              42.1    6
MKI67              41.9    7
FN1                41.9    7
ASPM               41.8    9
ANLN               41.8    9

# 9 Experimental data-based classification and ranking of top candidate features

ExIR: ExIR is a model for the classification and ranking of top candidate features. The input data could come from any type of experiment, such as transcriptomics and proteomics. This model is based on multi-level filtration and scoring via several supervised and unsupervised analyses, followed by the classification and integrative ranking of top candidate features. Using this function and depending on the input data and specified arguments, the user can get a graph object and one to four tables including:

• Drivers: Prioritized drivers are supposed to have the highest impact on the progression of a biological process or disease under investigation.
• Biomarkers: Prioritized biomarkers are supposed to have the highest sensitivity to the different conditions under investigation and the severity of each condition.
• DE-mediators: Prioritized DE-mediators are those features that are differentially expressed/abundant, but in a fluctuating manner, and play mediatory roles between drivers.
• nonDE-mediators: Prioritized nonDE-mediators are those features that are not differentially expressed/abundant but have associations with, and play mediatory roles between, drivers.

First, prepare your data. Suppose we have the data for time-course transcriptomics and we have previously performed differential expression analysis for each step-wise pair of time-points. Also, we have performed trajectory analysis to identify the genes that have significant alterations across all time-points.

# Prepare sample data
gene.names <- paste("gene", c(1:20000), sep = "_")

set.seed(60)
tp2.vs.tp1.DEGs <- data.frame(logFC = rnorm(n = 700, mean = 2, sd = 4),
                              FDR = runif(n = 700, min = 0.0001, max = 0.049))
set.seed(60)
rownames(tp2.vs.tp1.DEGs) <- sample(gene.names, size = 700)

set.seed(70)
tp3.vs.tp2.DEGs <- data.frame(logFC = rnorm(n = 1300, mean = -1, sd = 5),
                              FDR = runif(n = 1300, min = 0.0011, max = 0.039))
set.seed(70)
rownames(tp3.vs.tp2.DEGs) <- sample(gene.names, size = 1300)

set.seed(80)
regression.data <- data.frame(R_squared = runif(n = 800, min = 0.1, max = 0.85))
set.seed(80)
rownames(regression.data) <- sample(gene.names, size = 800)

## 9.1 Assembling the Diff_data

Use the function diff_data.assembly to automatically generate the Diff_data table for the ExIR model.
my_Diff_data <- diff_data.assembly(tp2.vs.tp1.DEGs,
                                   tp3.vs.tp2.DEGs,
                                   regression.data)
my_Diff_data[c(1:10),]

Have a look at the top 10 rows of the Diff_data data frame:

           Diff_value1 Sig_value1 Diff_value2 Sig_value2 Diff_value3
gene_17331         4.9          0           0          1           0
gene_12546         4.0          0           0          1           0
gene_12837        -0.3          0           0          1           0
gene_18522         1.4          0           0          1           0
gene_6260         -4.9          0           0          1           0
gene_2722         -4.9          0           0          1           0
gene_19882         6.3          0           0          1           0
gene_2790          3.3          0           0          1           0
gene_17011        -1.6          0           0          1           0
gene_8321          3.8          0           0          1           0

## 9.2 Preparing the Exptl_data

Now, prepare a sample normalized experimental data matrix.

set.seed(60)
MyExptl_data <- matrix(data = runif(n = 1000000, min = 2, max = 300),
                       nrow = 50, ncol = 20000,
                       dimnames = list(c(paste("cancer_sample", c(1:25), sep = "_"),
                                         paste("normal_sample", c(1:25), sep = "_")),
                                       gene.names))

# Log transform the data to bring them closer to normal distribution
MyExptl_data <- log2(MyExptl_data)

MyExptl_data[c(1:5, 45:50), c(1:5)]

Have a look at the top cancer and normal samples (rows) of the Exptl_data:

                 gene_1 gene_2 gene_3 gene_4 gene_5
cancer_sample_1       8      8      8      8      8
cancer_sample_2       7      8      6      8      8
cancer_sample_3       8      7      8      7      8
cancer_sample_4       8      7      7      7      8
cancer_sample_5       6      4      8      5      4
normal_sample_20      8      7      7      8      8
normal_sample_21      8      7      8      6      8
normal_sample_22      8      8      8      7      6
normal_sample_23      7      6      8      7      8
normal_sample_24      8      8      7      5      7
normal_sample_25      5      7      8      8      6

Now add the "condition" column to the Exptl_data table.

MyExptl_data <- as.data.frame(MyExptl_data)
MyExptl_data$condition <- c(rep("C", 25), rep("N", 25))

## 9.3 Running the ExIR model

Finally, prepare the other required input data for the ExIR model.

# The table of differential/regression data previously prepared
my_Diff_data

# The column indices of differential values in the Diff_data table
Diff_value <- c(1,3)

# The column indices of regression values in the Diff_data table
Regr_value <- 5

# The column indices of significance (P-value/FDR) values in the Diff_data table
Sig_value <- c(2,4)

# The matrix/data frame of normalized experimental data previously prepared
MyExptl_data

# The name of the column delineating the conditions of samples in the Exptl_data matrix
Condition_colname <- "condition"

# The desired list of features
set.seed(60)
MyDesired_list <- sample(gene.names, size = 1000)  # Optional

# Running the ExIR model
My.exir <- exir(Desired_list = MyDesired_list,
                Diff_data = my_Diff_data,
                Diff_value = Diff_value,
                Regr_value = Regr_value,
                Sig_value = Sig_value,
                Exptl_data = MyExptl_data,
                Condition_colname = Condition_colname,
                seed = 60, verbose = FALSE)

names(My.exir)
#> [1] "Driver table" "nonDE-mediator table" "Biomarker table" "Graph"
class(My.exir)
#> [1] "ExIR_Result"

Have a look at the heads of the output tables of ExIR:

• Drivers
           Score Z.score Rank P.value P.adj Type
gene_9469    100      13    1   1e-37     0 Accelerator
gene_339      73       9    2   2e-20     0 Accelerator
gene_6518     67       8    3   1e-17     0 Decelerator
gene_13429    60       8    4   1e-14     0 Accelerator
gene_8733     58       7    5   2e-13     0 Accelerator
gene_15640    57       7    6   3e-13     0 Accelerator

• Biomarkers
           Score Z.score Rank P.value P.adj Type
gene_27      100    25.3    1  3e-141   0.0 Up-regulated
gene_2464     32     7.8    2   4e-15   0.0 Down-regulated
gene_6903      5     0.9    3   2e-01   0.5 Up-regulated
gene_3196      4     0.7    4   2e-01   0.5 Up-regulated
gene_13177     3     0.6    5   3e-01   0.5 Up-regulated
gene_2663      3     0.5    6   3e-01   0.5 Up-regulated

• nonDE-mediators
           Score Z.score Rank P.value P.adj
gene_28      100     2.0    1    0.02   0.1
gene_13648     1    -0.4    2    0.66   0.7
gene_19101     1    -0.4    2    0.66   0.7
gene_15735     1    -0.4    2    0.66   0.7
gene_9637      1    -0.4    2    0.66   0.7
gene_2841      1    -0.4    2    0.66   0.7

• nonDE-mediators
              Score  Z.score Rank      P.value
gene_117  100.00000 5.236991    1 8.160804e-08
          P.adj 0.0000071
gene_516   57.59469 2.693473    2 3.535593e-03 0.1468437
gene_262   54.38275 2.500817    3 6.195351e-03 0.1468437
gene_974   53.87269 2.470223    4 6.751434e-03 0.1468437
gene_441   46.60681 2.034408    5 2.095524e-02 0.3646212
gene_533   40.91995 1.693304    6 4.519882e-02 0.6553829

The following tutorial video demonstrates how to run the ExIR model on sample experimental data. You can also computationally simulate knockout and/or up-regulation of the top candidate features outputted by ExIR to evaluate the impact of their manipulations on the flow of information/signaling and the integrity of the network prior to taking them to your lab bench.

## 9.4 ExIR visualization

The exir.vis is a function for the visualization of the output of the ExIR model. The function simply takes the output of the ExIR model as a single argument and returns a plot of the top 10 prioritized features of all classes. Here, we visualize the top five candidates of the results of the ExIR model obtained in the previous step.

My.exir.Vis <- exir.vis(exir.results = My.exir, n = 5, y.axis.title = "Gene")
My.exir.Vis

However, complete flexibility (a list of arguments) has been provided for the adjustment of all of the visual features of the plot and for the selection of the desired classes, feature types, and the number of top candidates. The following tutorial video demonstrates how to visualize the results of the ExIR model.

## 9.5 ExIR shiny app

A shiny app has also been developed for running the ExIR model, visualizing its results, and computationally simulating knockout and/or up-regulation of its top candidate outputs; it is accessible using the following command.

influential::runExample("ExIR")

You can also access the shiny app online at the Influential Software Package server.

# 10 Computational manipulation of cells

The comp_manipulate is a function for the simulation of feature (gene, protein, etc.) knockout and/or up-regulation in cells. This function works based on the SIRIR (SIR-based Influence Ranking) model and can be applied to the output of the ExIR model or any other independent association network. For feature (gene/protein/etc.) knockout, the SIRIR model is used to remove the feature from the network and assess its impact on the flow of information (signaling) within the network. On the other hand, in the case of up-regulation, a node similar to the desired node is added to the network with exactly the same connections (edges) as the original node. Next, the SIRIR model is used to evaluate the difference in the flow of information/signaling after adding (up-regulating) the desired feature/node compared with the original network.

In case you are applying this function to the output of the ExIR model, note that as gene/protein knockout would impact the integrity of the network under investigation as well as the networks of other overlapping biological processes/pathways, it is recommended to select those features that simultaneously have the highest (most significant) ExIR-based rank and the lowest knockout rank. In contrast, as up-regulation would not affect the integrity of the network, you may select the features with the highest (most significant) ExIR-based and up-regulation-based ranks.
Altogether, it is recommended to select the features with the highest (most significant) ExIR-based ranks (major drivers or mediators of the biological process/disease under investigation) and up-regulation-based ranks (having a higher impact on the signaling within the network under investigation when up-regulated), but with the lowest knockout-based rank (causing the lowest disturbance to the network under investigation as well as other overlapping networks). Below is an example of running this function on the same ExIR output generated above.

# Select which genes to knock out
set.seed(60)
ko_vertices <- sample(igraph::as_ids(V(My.exir$Graph)), size = 5)

# Select which genes to up-regulate
set.seed(1234)
upregulate_vertices <- sample(igraph::as_ids(V(My.exir$Graph)), size = 5)

Computational_manipulation <- comp_manipulate(exir_output = My.exir,
                                              ko_vertices = ko_vertices,
                                              upregulate_vertices = upregulate_vertices,
                                              beta = 0.5, gamma = 1,
                                              no.sim = 100, seed = 1234)

Have a look at the heads of the output tables:

• Knockout
   Feature_name Rank Manipulation_type
1  gene_1877       1 Knockout
5  gene_9469       2 Knockout
4  gene_2841       3 Knockout
2  gene_13430      4 Knockout
3  gene_6197       5 Knockout

• Up-regulation
   Feature_name Rank Manipulation_type
5  gene_2841       1 Up-regulation
1  gene_8733       2 Up-regulation
3  gene_9469       2 Up-regulation
2  gene_13430      4 Up-regulation
4  gene_16640      5 Up-regulation

• Combined
    Feature_name Rank Manipulation_type
51  gene_2841       1 Up-regulation
11  gene_8733       2 Up-regulation
31  gene_9469       2 Up-regulation
21  gene_13430      4 Up-regulation
41  gene_16640      5 Up-regulation
1   gene_1877       6 Knockout
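If you want to cross-reference the knockout and up-regulation rankings programmatically, a sketch along the following lines could work. Note that the element names of the comp_manipulate output list are assumed here for illustration (inspect them with names(Computational_manipulation)); they are not taken from the package documentation.

# Element names assumed -- verify with names(Computational_manipulation)
ko <- Computational_manipulation$Knockout
up <- Computational_manipulation$"Up-regulation"
combined <- merge(ko, up, by = "Feature_name", suffixes = c(".ko", ".up"))

# Candidates with a strong up-regulation rank and a low-impact (large) knockout rank
combined[order(combined$Rank.up, -combined$Rank.ko), ]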
https://www.aimsciences.org/article/doi/10.3934/dcds.2017277
# American Institute of Mathematical Sciences

December 2017, 37(12): 6383-6403. doi: 10.3934/dcds.2017277

## Random pullback exponential attractors: General existence results for random dynamical systems in Banach spaces

1 Departamento Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, C/ Tarfia s/n, 41012 Sevilla, Spain
2 Institut für Mathematik und Wissenschaftliches Rechnen, Karl-Franzens-Universität Graz, Heinrichstr. 36, 8010 Graz, Austria

* Corresponding author: Stefanie Sonner

Received March 2017. Revised July 2017. Published August 2017.

Fund Project: The first author was partially supported by FEDER (EU) and Ministerio de Economía y Competitividad (Spain) grant MTM2015-63723-P and by the Junta de Andalucía under the Proyecto de Excelencia P12-FQM-1492.

We derive general existence theorems for random pullback exponential attractors and deduce explicit bounds for their fractal dimension. The results are formulated for asymptotically compact random dynamical systems in Banach spaces.

Citation: Tomás Caraballo, Stefanie Sonner. Random pullback exponential attractors: General existence results for random dynamical systems in Banach spaces. Discrete & Continuous Dynamical Systems, 2017, 37 (12): 6383-6403. doi: 10.3934/dcds.2017277
https://pure.ewha.ac.kr/en/publications/temporal-variations-and-characteristics-of-the-carbonaceous-speci
# Temporal variations and characteristics of the carbonaceous species in PM2.5 measured at Anmyeon Island, a background site in Korea

Jong Sik Lee, Eun Sil Kim, Ki Ae Kim, Jian Zhen Yu, Yong Pyo Kim, Chang Hoon Jung, Ji Yi Lee

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

## Abstract

Routine measurements of carbonaceous species in PM2.5, including organic carbon (OC), elemental carbon (EC), water-soluble organic carbon (WSOC), and humic-like substance carbon (HULIS-C), were performed at Anmyeon Island (AI) to clarify the seasonal variation and carbonaceous aerosol concentrations at a background site in Korea between 2015 and 2016. The annual average OC and EC concentrations were 4.52±3.25 μg/m3 and 0.46±0.28 μg/m3, respectively, and there were no clear seasonal variations in OC and EC concentrations. The average concentrations of WSOC and water-insoluble organic carbon (WISOC) were 2.56±1.95 μg/m3 and 1.96±1.45 μg/m3, respectively, and their composition in OC showed high temporal variations. A low correlation between WISOC and EC was observed, while WSOC concentrations were highly correlated with secondary organic carbon concentrations, which were estimated using the EC tracer method. The results indicate that the formation of secondary organic aerosols is a major factor in the determination of WSOC concentrations in this region. HULIS-C was the major component of WSOC, accounting for 39-99% of WSOC, and its average concentration was 1.88±1.52 μg/m3. Two distinct periods with high carbonaceous species in PM2.5 were observed and characterized by their OC/EC ratios. The high concentration of OC with a high OC/EC ratio was due to the influence of a mixture of emissions from biomass burning and secondary formation transported from outside AI, whereas the high concentrations of OC and EC with a low OC/EC ratio were related to local vehicular emissions.

Original language: English
Pages: 35-46
Number of pages: 12
Journal: Asian Journal of Atmospheric Environment
Volume: 14
Issue: 1
DOI: https://doi.org/10.5572/AJAE.2020.14.1.035
State: Published - 1 Mar 2020

## Keywords

• Biomass burning
• Carbonaceous species
• GAW station
• HULIS
• PM
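For context, the EC tracer method mentioned in the abstract is commonly written as follows (a standard formulation; the specific primary OC/EC ratio used by the authors is not given in this abstract):

$$\mathrm{SOC} = \mathrm{OC}_{\mathrm{total}} - \mathrm{EC}\times\left(\frac{\mathrm{OC}}{\mathrm{EC}}\right)_{\mathrm{primary}}$$

where the primary OC/EC ratio is typically estimated from periods dominated by fresh combustion emissions.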
http://mathhelpforum.com/advanced-algebra/190570-matrix-nilpotencies.html
# Math Help - matrix nilpotencies

1. ## matrix nilpotencies

Hi everyone. Do you know if any matrix exists with a nilpotency of 0? A nilpotent matrix N is a square matrix defined by N^k = 0 for some k >= 0... but isn't any square matrix to the zero power the identity matrix... so there is no nilpotent matrix with nilpotency 0? Is my thinking correct?

2. ## Re: matrix nilpotencies

Originally Posted by cp05
...so there is no nilpotent matrix with nilpotency 0?? is my thinking correct?

Yes, you are right.

3. ## Re: matrix nilpotencies

Yes. Nilpotency is only defined for positive powers k. A^0 is by definition the identity matrix, for any non-zero matrix A. The zero matrix is a special case, just like the number 0^0 is a special case, although most sources (wolframalpha/mathematica, for example) define it as I (which, oddly enough, is a conflict when one considers 1x1 matrices).
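To illustrate the thread's point with the standard textbook example (not from the thread itself): the matrix

$$N=\begin{pmatrix}0&1\\0&0\end{pmatrix}, \qquad N^2=\begin{pmatrix}0&0\\0&0\end{pmatrix},$$

is nilpotent of index 2. Since $N^0 = I \neq 0$ for any square matrix, the index of nilpotency of a nonzero nilpotent matrix is always a positive integer, consistent with the answers above.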
https://www.clutchprep.com/physics/practice-problems/144689/what-is-the-emf-of-a-battery-that-does-0-60-j-of-work-to-transfer-0-050-c-of-cha
# Problem: What is the emf of a battery that does 0.60 J of work to transfer 0.050 C of charge from the negative to the positive terminal?

###### Solution

The emf of a battery is given by:

$$\epsilon = \frac{W}{q}$$
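Substituting the given values (straightforward arithmetic based on the formula above):

$$\epsilon = \frac{W}{q} = \frac{0.60\ \mathrm{J}}{0.050\ \mathrm{C}} = 12\ \mathrm{V}$$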
https://www.transtutors.com/questions/for-the-profitability-analysis-compute-mcconnell-8217-s-gross-profit-percentage-and--1260256.htm
# For the profitability analysis, compute McConnell's gross profit percentage and rate of return on net sales

For the profitability analysis, compute McConnell's
1. gross profit percentage and
2. rate of return on net sales.

Compare these figures with the industry averages. Is McConnell's profit performance better or worse than the industry average?
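For reference, the standard textbook definitions of these two ratios are as follows (the problem's actual figures are not given here):

$$\text{Gross profit percentage} = \frac{\text{Gross profit}}{\text{Net sales}} = \frac{\text{Net sales} - \text{Cost of goods sold}}{\text{Net sales}} \times 100\%$$

$$\text{Rate of return on net sales} = \frac{\text{Net income}}{\text{Net sales}} \times 100\%$$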
http://mathhelpforum.com/algebra/49316-how-does-work.html
# Math Help - how does this work?

1. ## how does this work?

I know that

$$\frac{1+\sqrt{5}}{2} = \sqrt[3]{2+\sqrt{5}} = 1.618033989\ldots$$

but what is the math behind it? Specifically, a way to simplify $\sqrt[3]{2+\sqrt{5}}$ to $\frac{1+\sqrt{5}}{2}$.

Thanks

2. Just to make this clear: it's $(1+\sqrt{5})/2$ which equals $\sqrt[3]{2+\sqrt{5}}$. I hope that is clear.

3. Hello,

Originally Posted by Smoriginal
Just to make this clear: it's $(1+\sqrt{5})/2$ which equals $\sqrt[3]{2+\sqrt{5}}$. I hope that is clear.

$\frac{1+\sqrt{5}}{2}=\sqrt[3]{2+\sqrt{5}}$

You can prove it by cubing both sides. This is all I can see for the moment...

(note: $(a+b)^3=a^3+3a^2b+3ab^2+b^3$)
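To spell out the cubing argument hinted at in the last reply: let $\phi=\frac{1+\sqrt{5}}{2}$. Then

$$\phi^2=\frac{(1+\sqrt{5})^2}{4}=\frac{6+2\sqrt{5}}{4}=\frac{3+\sqrt{5}}{2}=\phi+1,$$

so

$$\phi^3=\phi\cdot\phi^2=\phi^2+\phi=\frac{3+\sqrt{5}}{2}+\frac{1+\sqrt{5}}{2}=\frac{4+2\sqrt{5}}{2}=2+\sqrt{5}.$$

Taking real cube roots of $\phi^3=2+\sqrt{5}$ gives $\phi=\sqrt[3]{2+\sqrt{5}}$, as claimed.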
https://ifwisdomwereteachable.wordpress.com/2013/06/30/the-central-amenability-constant-of-a-finite-group-part-3-of-n/
OK, back to the story of the central amenability constant. I’ll take the opportunity to re-tread some of the ground from the first post.

## 1. Review/recap

Given a finite group G, ${{\mathbb C} G}$ denotes the usual complex group algebra: we think of it as the vector space ${{\mathbb C}^G}$ equipped with a suitable multiplication. This has a canonical basis as a vector space, indexed by group elements: we denote the basis vector corresponding to an element x of G by ${\delta_x}$. Thus for any function ${\psi:G\rightarrow{\mathbb C}}$, we have ${\psi = \sum_{x\in G} \psi(x)\delta_x}$.

(Aside: this is not really the correct “natural” way to think of the group algebra if one generalizes from finite groups to infinite groups; one has to be more careful about whether one is thinking “covariantly or contravariantly”. ${{\mathbb C}^G}$ is naturally a contravariant object as G varies, but the group algebra should be covariant as G varies. However, our approach allows us to view characters on G as elements of the group algebra, which is a very convenient elision.)

The centre of ${{\mathbb C} G}$, henceforth denoted by ${{\rm Z\mathbb C} G}$, is commutative and spanned by its minimal idempotents, which are all of the form

$\displaystyle p_\phi = \frac{\phi(e)}{|G|}\phi \equiv \frac{\phi(e)}{|G|}\sum_{x\in G} \phi(x)\delta_x$

for some irreducible character ${\phi:G\rightarrow{\mathbb C}}$. Moreover, ${\phi\mapsto p_\phi}$ is a bijection between the set of irreducible characters and the set of minimal idempotents in ${{\rm Z\mathbb C} G}$.

We define

$\displaystyle {\bf m}_G = \sum_{\phi\in {\rm Irr}(G)} p_\phi \otimes p_\phi \in {\rm Z\mathbb C} G \otimes {\rm Z\mathbb C} G \equiv {\rm Z\mathbb C} (G\times G )$

and, equipping ${{\mathbb C}(G\times G )}$ with the natural ${\ell^1}$-norm, defined by

$\displaystyle \Vert f\Vert = \sum_{(x,y) \in G\times G } |f(x,y)|,$

we define ${{\rm AM}_{\rm Z}(G)}$ to be ${\Vert {\bf m}_G \Vert}$. Explicitly, if we use the convention that the value of a class function ${\psi}$ on any element of a conjugacy class C is denoted by ${\psi(C)}$, we have

$\displaystyle {\rm AM}_{\rm Z}(G) = \sum_{C,D\in{\rm Conj}(G)} |C|\ |D| \left\vert \sum_{\phi\in {\rm Irr}(G)} \frac{1}{|G|^2} \phi(e)^2\phi(C)\phi(D) \right\vert \;,$

the formula stated in the first post of this series.

## 2. Moving onwards

Remark 1 As I am writing these things up, it occurs to me that “philosophically speaking”, perhaps one should regard ${{\bf m}_G}$ as an element of the group algebra ${{\mathbb C}(G\times G^{\rm op})}$, where Gop denotes the group whose underlying set is that of G but equipped with the reverse multiplication. It is easily checked that a function on ${G\times G}$ is central as an element of ${{\mathbb C}(G\times G )}$ if and only if it is central as an element of the algebra ${{\mathbb C}(G\times G^{\rm op})}$, so we can get away with the definition chosen here. Nevertheless, I have a suspicion that the ${G\times G^{\rm op}}$ picture is somehow the “right” one to adopt, if one wants to put the study of ${{\bf m}_G}$ into a wider algebraic context.

${{\bf m}_G}$ is a non-zero idempotent in a Banach algebra, so it follows from submultiplicativity of the norm that ${{\rm AM}_{\rm Z}(G)=\Vert {\bf m}_G \Vert \geq 1}$. When do we have equality?

Theorem 2 (Azimifard–Samei–Spronk) ${{\rm AM}_{\rm Z}(G)=1}$ if and only if G is abelian.

The proof of necessity (that is, the “only if” direction) will go in the next post.
In the remainder of this post, I will give two proofs of sufficiency (that is, the “if” direction). In the paper of Azimifard–Samei–Spronk (MR 2490229; see also arXiv 0805.3685) where I first learned of ${{\rm AM}_{\rm Z}(G)}$, this direction is glossed over quickly, since it follows from more general facts in the theory of amenable Banach algebras. I will return later, in Section 2.2, to an exposition of how this works for the case in hand. First, let us see how we can approach the problem more directly.

#### 2.1. Proof of sufficiency: direct version

Suppose G is abelian, and let ${n=|G|}$. Then G has exactly n irreducible characters, all of which are linear (i.e. one-dimensional representations, a.k.a. multiplicative functionals). Denoting these characters by ${\phi_1,\dots,\phi_n}$, we have

$\displaystyle {\bf m}_G = \sum_{j=1}^n \frac{1}{n}\phi_j \otimes \frac{1}{n}\phi_j$

so that

$\displaystyle {\rm AM}_{\rm Z}(G) = \sum_{x,y\in G} \left\vert \sum_{j=1}^n \frac{1}{n^2}\phi_j(x)\phi_j(y)\right\vert$

This sum can be evaluated explicitly using some Fourier analysis — or, in the present context, the Schur column orthogonality relations. To make this a bit more transparent, recall that ${\phi(y^{-1})=\overline{\phi(y)}}$ for all characters ${\phi}$ and all y in G. Hence by a change of variables in the previous equation, we get

$\displaystyle {\rm AM}_{\rm Z}(G) = \frac{1}{n^2} \sum_{x,y\in G} \left\vert \sum_{j=1}^n \phi_j(x)\overline{\phi_j(y)} \right\vert$

For a fixed element x in G, the n-tuple ${(\phi_1(x), \dots, \phi_n(x) )}$ is a column in the character table of G. We know by general character theory for finite groups that distinct columns of the character table, viewed as column vectors with complex entries, are orthogonal with respect to the standard inner product. Hence most terms in the expression above vanish, and we are left with

\displaystyle \begin{aligned} {\rm AM}_{\rm Z}(G) & = \frac{1}{n^2} \sum_{x\in G} \left\vert \sum_{j=1}^n \phi_j(x)\overline{\phi_j(x)} \right\vert \\ & = \frac{1}{n^2} \sum_{x\in G} \sum_{j=1}^n \vert\phi_j(x)\vert^2 \end{aligned}

which equals ${1}$, since each ${\phi_j}$ takes values in ${\mathbb T}$. This completes the proof.

#### 2.2. Proof of sufficiency: slick version

The following argument is an expanded version of the one that is outlined, or alluded to, in the paper of Azimifard–Samei–Spronk. It is part of the folklore in Banach algebras — for given values of “folk” — but really the argument goes back to the study of “separable algebras” in the sense of ring theory.

Lemma 3 Let A be an associative, commutative algebra, with identity element 1A. Let ${\Delta: A\otimes A \rightarrow A}$ be the linear map defined by ${\Delta(a\otimes b)=ab}$. Then there is at most one element m in ${A\otimes A}$ that simultaneously satisfies ${\Delta(m)}$=1A and ${a\cdot m = m\cdot a}$ for all a in A.

Proof: Let us first omit the assumption that A is commutative, and work merely with an associative algebra that has an identity. Define the following multiplication on ${A\otimes A}$:

$\displaystyle (a\otimes b) \odot (c\otimes d) := ac \otimes db .$

Then ${(A\otimes A, \odot)}$ is an associative algebra — the so-called enveloping algebra of A. If m satisfies the conditions mentioned in the lemma, then

$\displaystyle (a\otimes b) \odot m = a\cdot m \cdot b = \Delta(a\otimes b)\cdot m \;;$

and so, by taking linear combinations, ${w\odot m = \Delta(w)\cdot m}$ for every w in ${A\otimes A}$.
If n is another element of ${A\otimes A}$ satisfying the conditions of the lemma, we therefore have ${n\odot m = \Delta(n)\cdot m = m}$, and by symmetry, ${m\odot n = n}$. Now we use the assumption that A is commutative. From this assumption, we see that ${(A\otimes A,\odot)}$ is also commutative. Therefore $\displaystyle m = n\odot m = m\odot n = n$ as required. $\Box$ Now let G be a finite group and let A = ${{\rm Z\mathbb C} G}$. Because A is spanned by its minimal idempotents ${p_\phi}$, and because minimal idempotents in a commutative algebra are mutually orthogonal, ${{\bf m}_G = \sum_\phi p_\phi \otimes p_\phi}$ satisfies the two conditions mentioned in Lemma 3. On the other hand, if G is abelian, consider $\displaystyle {\bf n}_G := \frac{1}{|G|} \sum_{x\in G} \delta_x \otimes \delta_{x^{-1}} \in {\mathbb C} G \otimes {\mathbb C} G = {\rm Z\mathbb C} G \otimes {\rm Z\mathbb C} G = A\otimes A,$ where we have used the fact that ${{\mathbb C} G = {\rm Z\mathbb C} G = A}$ for abelian G. Clearly ${\Delta({\bf n}_G)=1_A}$, and a direct calculation shows that ${\delta_g\cdot {\bf n}_G = {\bf n}_G\cdot \delta_g}$ for all g in G, so by linearity ${{\bf n}_G}$ also satisfies both conditions mentioned in Lemma 3. Applying the lemma tells us that ${{\bf m}_G= {\bf n}_G}$, and in particular $\displaystyle {\rm AM}_{\rm Z}(G) = \Vert {\bf n}_G \Vert = 1$ as required.
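As a concrete complement to this argument, one can verify the identity ${{\bf m}_G={\bf n}_G}$ coefficientwise for a cyclic group, where both elements are completely explicit. A small numerical sketch (Python/NumPy; all names are mine):

```python
import numpy as np

n = 6  # work in Z_n
# character table of Z_n: phi_j(x) = exp(2*pi*i*j*x/n)
j, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
chars = np.exp(2j * np.pi * j * x / n)

# coefficient of delta_x (tensor) delta_y in m_G = sum_j p_j (tensor) p_j, p_j = phi_j/n
m = np.einsum("jx,jy->xy", chars, chars) / n**2

# coefficient of delta_x (tensor) delta_y in n_G = (1/n) sum_x delta_x (tensor) delta_{-x}
nG = np.array([[1.0 / n if (u + v) % n == 0 else 0.0 for v in range(n)]
               for u in range(n)])

print(np.allclose(m, nG))   # True: the two idempotents coincide
print(np.abs(nG).sum())     # 1.0 = AM_Z(Z_n)
```

The second printed value is just the ${\ell^1}$-norm of ${{\bf n}_G}$, which is visibly 1 since the element has exactly ${|G|}$ non-zero coefficients of size ${1/|G|}$.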
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 70, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9764791131019592, "perplexity": 209.67353813799153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463425.63/warc/CC-MAIN-20150226074103-00233-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.ecrts.org/suggestions-for-authors/
# Suggestions for authors

Below are suggestions for structuring and writing your paper that are likely to be appreciated by the reviewers. We hope that this will help you satisfy the evaluation criteria of the conference. Note that these are not mandatory rules.

Structure of the paper. It is usually a good idea to

• clearly formulate, explain and motivate in the introduction the research problem, together with a short paragraph titled “Contributions of the paper”, in which you briefly summarize the innovative technical contributions of the paper.
• have a “Related work” section that shows why the addressed problem has not been completely solved before, underlines the key differences between the proposed approach and those that have been published previously (including prior work of your own), and makes clear where you have built on existing results.
• spend a paragraph or even a section called “System model” (or something similar) on presenting the system model, describing accurately notation and nomenclature, and discussing the assumptions and limitations of your model.
• present mathematical proofs of correctness if your paper contains theoretical work.
• add an “Experimental evaluation” and/or “Case study” section to provide evidence of scientific advancement if the paper contains new algorithms, system design or methodology, or applications that improve on the existing state of the art (or open up a completely new field).
• add a short paragraph in the conclusions to briefly summarize the main innovative technical contributions of the paper.

Notations. Make sure that any notation used is clearly defined and distinct (do not use symbols that can easily be confused with one another). The best place to define terminology and notation is together with the description of the system model. This gives reviewers a single place to refer back to where they can find any symbols they need to look up again. If you have a large amount of notation in your paper, consider providing a table of notation. Make sure you define all notation and acronyms before they are used.

Figures. Make sure that all of your figures, diagrams and graphs are legible when printed out in black and white. Avoid the temptation to make your figures the size of a postage stamp or thumbnail in order to fit your content into the page limits, and make sure all text is legible and not too small. Ensure the different lines on the graphs are clearly distinguishable by using markers that are obviously different and, where necessary, using different line types (e.g. dashed, dotted).

Experiments. A reader of your paper should be able to reproduce your experiments and obtain the same results. Hence, it is necessary to describe the experimental setup, including details of case study or benchmark data (or where it can be obtained) and how synthetic data (if used) has been generated. If you are reporting statistical data, then make sure you present measures pertaining to the quality of the results obtained, for example confidence intervals or variance. To aid in the reproducibility of results, consider also making your evaluation code available.

Acknowledgments. These instructions as well as some descriptions of our evaluation criteria originate in a list of FAQs started by Giuseppe Lipari for ECRTS’06 with contributions by many others since. The document was heavily edited in 2019 to separate between suggestions and requirements.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563811779022217, "perplexity": 746.7053765693433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00534.warc.gz"}
http://math.stackexchange.com/users/42344/ilovemath?tab=summary
# ILoveMath

This account is temporarily suspended for voting irregularities. The suspension period ends on Feb 25 at 22:12.

Impact: ~135k people reached

### Questions (269)

16 Reflections on math education
10 Number theory fun problem
8 Must the (continuous) image of a null set be null?
7 Formality and mathematics
7 Consider $f_n(x) = \sum_{k=0}^{n} {x^k}$. Does $f_n$ converge pointwise on $[0,1]$?

### Reputation (1)

+5 $R[x]$ has a subring isomorphic to $R$.
−2 What does $\frac{1}{n}$ converge to?
+5 Example of a non-hausdorff space
+5 Loop is contractible iff it extends to a map of disk

### Answers

18 Proving $a^ab^b + a^bb^a \le 1$, given $a + b = 1$
18 What's the formula to solve summation of logarithms?
8 If $g^2 = e$ for all $g \in G$, then $G$ is abelian
8 Finding the derivative of $y^x = e^y$
8 how do I prove that $1 > 0$ in an ordered field?

### Tags (113)

106 calculus × 139 | 31 algebra-precalculus × 22 | 31 sequences-and-series × 18 | 29 logarithms × 5 | 27 real-analysis × 112 | 26 trigonometry × 16 | 24 integration × 25 | 18 number-theory × 8 | 18 summation | 18 contest-math

### Accounts (8)

Academia 188 rep | Philosophy 113 rep | Skeptics 101 rep | French Language 101 rep | Writers 101 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8950905799865723, "perplexity": 2786.138341567858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00231-ip-10-236-182-209.ec2.internal.warc.gz"}
https://tex.meta.stackexchange.com/tags/markdown/hot
# Tag Info

### Double backslashes disappear from code

Update: This was almost certainly my fault. On December 20, 2013 I moved the TeX (and meta) databases to new homes. I'm not sure how the original problem occurred, but TeX and its meta had a very odd ...

### The CommonMark diary

Reading some feedback on meta.stackexchange.com I got the impression that this community feels quite tense about the upcoming CommonMark migration. I feel that some extra levels of transparency shared ...

### Double backslashes disappear from code

Based on \\ corruption, I've created a new query TeX.SX \\ corruption (based on user ID) which I hope will help anyone to find their own posts affected by this bug! P.S. The database is updated ...

### The CommonMark diary

For users who are curious about how the automated edits look like, and who want to review them for potential issues: visit the profile of the Community user (ID -1), and navigate to 'all actions' ...

### Are questions using markdown with LaTeX allowed? (accepted)

Issues using a variety of software - Pandoc included - are on topic here as long as the issue relates to (La)TeX in some way. As mentioned in the TeX - LaTeX Tour page: Ask about... Formats like ...

### How to quote a left quote inline? (accepted)

You want two backticks ...

### Quoting codes in a list leaves a </li> (accepted)

As you can see on the revision page which shows a live rendering of this question, this is fixed now. It's still visible in your question because the version displayed here is saved on submission; ...

### Is there no way at all to typeset equations using TeX on the TeX SE at all?

This issue has been raised several times, and the central concern remains the same. When people need to show what they are seeing from TeX, we all need to see exactly what they see. Using some ...

### Problem writing the sequence \@ in inline <code>...</code>-blocks (accepted)

The markdown backtick syntax does not only the formatting like the HTML <code> block, but it also escapes special characters, so a \ or a < or > have no special meaning. Inside a <code> ...

### Something messed with my answers backslashes and newlines (accepted)

The linked question has now been fixed (along with ~11,000 similarly corrupted postings); see Community effort in fixing the double backslashes issue for a summary of how this got fixed in the end.

### Double backslashes disappear from code

This and the linked question ‘double backslash + newline’ collapses to ‘single backslash’ when I hit ‘edit’ are two manifestations of the same issue; see Community effort in fixing the double ...

### Double backslashes disappear from code

I've looked into it and I couldn't find anything specific. The thing is, we don't ever modify markdown w/o recording history. So when you read about rebakes and other stuff we do, those are affecting ...

### Typo in Markdown Help (accepted)

Indeed. This seems like a markdown "typo." However, it is a network-wide issue and should therefore be addressed at that level. There is a similar question related to this posted on the main network ...

### Failure to show image that already exists in imgur (accepted)

Here are some of the issues with the markdown provided in the linked question: The use of the HTML <img> tag is incorrect. The correct usage would be <img src="https://i.stack.imgur.com/...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106889367103577, "perplexity": 3485.13729350283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00768.warc.gz"}
https://www.groundai.com/project/turbulent-rayleigh-benard-convection-in-spherical-shells/
# Turbulent Rayleigh-Bénard convection in spherical shells

Thomas Gastine, Johannes Wicht and Jonathan M. Aurnou (2015)

Keywords: Bénard convection, boundary layers, geophysical and geological flows

## Abstract

We simulate numerically Boussinesq convection in non-rotating spherical shells for a fluid with a unity Prandtl number and Rayleigh numbers up to . In this geometry, curvature and radial variations of the gravitational acceleration yield asymmetric boundary layers. A systematic parameter study for various radius ratios (from to ) and gravity profiles allows us to explore the dependence of the asymmetry on these parameters. We find that the average plume spacing is comparable between the spherical inner and outer bounding surfaces. An estimate of the average plume separation allows us to accurately predict the boundary layer asymmetry for the various spherical shell configurations explored here. The mean temperature and horizontal velocity profiles are in good agreement with classical Prandtl-Blasius laminar boundary layer profiles, provided the boundary layers are analysed in a dynamical frame that fluctuates with the local and instantaneous boundary layer thicknesses. The scaling properties of the Nusselt and Reynolds numbers are investigated by separating the bulk and boundary layer contributions to the thermal and viscous dissipation rates using numerical models with and a gravity proportional to . We show that our spherical models are consistent with the predictions of Grossmann & Lohse’s (2000) theory and that the $Nu(Ra)$ and $Re(Ra)$ scalings are in good agreement with plane layer results.

## 1 Introduction

Thermal convection is ubiquitous in geophysical and astrophysical fluid dynamics and rules, for example, turbulent flows in the interiors of planets and stars. The so-called Rayleigh-Bénard (hereafter RB) convection is probably the simplest paradigm to study heat transport phenomena in these natural systems. In this configuration, convection is driven in a planar fluid layer cooled from above and heated from below (figure 1a). The fluid is confined between two rigid impenetrable walls maintained at constant temperatures. The key issue in RB convection is to understand the turbulent transport mechanisms of heat and momentum across the layer. In particular, how does the heat transport, characterised by the Nusselt number $Nu$, and the flow amplitude, characterised by the Reynolds number $Re$, depend on the various control parameters of the system, namely the Rayleigh number $Ra$, the Prandtl number $Pr$ and the cartesian aspect ratio $\Gamma$? In general, $\Gamma$ quantifies the fluid layer width over its height in classical planar or cylindrical RB cells. In spherical shells, we rather employ the ratio $\eta = r_i/r_o$ of the inner to the outer radius to characterise the geometry of the fluid layer. Laboratory experiments of RB convection are classically performed in rectangular or in cylindrical tanks with planar upper and lower bounding surfaces where the temperature contrast is imposed (see figure 1b). In such a system, the global dynamics are strongly influenced by the flow properties in the thermal and kinematic boundary layers that form in the vicinity of the walls. The characterisation of the structure of these boundary layers is crucial for a better understanding of the transport processes.
The marginal stability theory by Malkus (1954) is the earliest boundary layer model and relies on the assumption that the thermal boundary layers adapt their thicknesses to maintain a critical boundary layer Rayleigh number, which implies $Nu \sim Ra^{1/3}$. Assuming that the boundary layers are sheared, Shraiman & Siggia (1990) later derived a theoretical model that yields scalings of the form $Nu \sim Ra^{2/7}$ and $Re \sim Ra^{3/7}$ (see also Siggia, 1994). These asymptotic laws were generally consistent with most of the experimental results obtained in the 1990s up to . Within the typical experimental resolution of one percent, simple power laws of the form $Nu \sim Ra^{\gamma}$ were found to provide an adequate representation with exponents $\gamma$ ranging from 0.28 to 0.31, in relatively good agreement with the Shraiman & Siggia model (e.g. Castaing et al., 1989; Chavanne et al., 1997; Niemela et al., 2000). However, later high-precision experiments by Xu et al. (2000) revealed that the dependence of $Nu$ upon $Ra$ cannot be accurately described by such simple power laws. In particular, the local slope of the $Nu(Ra)$ function has been found to increase slowly with $Ra$. The effective exponent roughly ranges from values close to 2/7 at moderate Rayleigh numbers to larger values at the highest ones (e.g. Funfschilling et al., 2005; Cheng et al., 2015). Grossmann & Lohse (2000, 2004) derived a competing theory capable of capturing these complex dynamics (hereafter GL). This scaling theory is built on the assumption of laminar boundary layers of Prandtl-Blasius (PB) type (Prandtl, 1905; Blasius, 1908). According to the GL theory, the flows are classified in four different regimes in the $Ra$-$Pr$ phase space according to the relative contribution of the bulk and boundary layer viscous and thermal dissipation rates. The theory predicts non-power-law behaviours for $Nu$ and $Re$, in good agreement with the $Nu(Ra)$ and $Re(Ra)$ dependences observed in recent experiments and numerical simulations of RB convection in planar or cylindrical geometry (see for recent reviews Ahlers et al., 2009; Chillà & Schumacher, 2012). Benefiting from the interplay between experiments and direct numerical simulations (DNS), turbulent RB convection in planar and cylindrical cells has received a lot of interest in the past two decades. However, the actual geometry of several fundamental astrophysical and geophysical flows is essentially three-dimensional within concentric spherical upper and lower bounding surfaces under the influence of a radial buoyancy force that strongly depends on radius. The direct applicability of the results derived in the planar geometry to spherical shell convection is thus questionable. As shown in figure 1(c), convection in spherical shells mainly differs from the traditional plane layer configuration because of the introduction of curvature and the absence of side walls. These specific features of thermal convection in spherical shells yield significant dynamical differences with plane layers. For instance, the heat flux conservation through spherical surfaces implies that the temperature gradient is larger at the lower boundary than at the upper one to compensate for the smaller area of the bottom surface. This yields a much larger temperature drop at the inner boundary than at the outer one. In addition, this pronounced asymmetry in the temperature profile is accompanied by a difference between the thicknesses of the inner and the outer thermal boundary layers.
Following Malkus’s marginal stability arguments, Jarvis (1993) and Vangelov & Jarvis (1994) hypothesised that the thermal boundary layers in curvilinear geometries adjust their thickness to maintain the same critical boundary layer Rayleigh number at both boundaries. This criterion is however in poor agreement with the results from numerical models (e.g. Deschamps et al., 2010). The exact dependence of the boundary layer asymmetry on the radius ratio and the gravity distribution thus remains an open question in thermal convection in spherical shells (Bercovici et al., 1989; Jarvis et al., 1995; Sotin & Labrosse, 1999; Shahnas et al., 2008; O’Farrell et al., 2013). This open issue sheds some light on the possible dynamical influence of asymmetries between the hot and cold surfaces that originate due to both the boundary curvature and the radial dependence of buoyancy in spherical shells. Ground-based laboratory experiments involving spherical geometry and a radial buoyancy forcing are limited by the fact that gravity is vertically downwards instead of radially inwards (Scanlan et al., 1970; Feldman & Colonius, 2013). A possible way to circumvent this limitation is to conduct experiments under microgravity to suppress the vertically downward buoyancy force. Such an experiment was realised by Hart et al. (1986), who designed a hemispherical shell that flew on board the space shuttle Challenger in May 1985. The radial buoyancy force was modelled by imposing an electric field across the shell. The temperature dependence of the fluid’s dielectric properties then produced an effective radial gravity that decreases with the fifth power of the radius (i.e. $g \propto 1/r^5$). More recently, a similar experiment named “GeoFlow” was run on the International Space Station, where much longer flight times are possible (Futterer et al., 2010, 2013). This latter experiment was designed to mimic the physical conditions in the Earth’s mantle. It was therefore mainly dedicated to the observation of plume-like structures in a high Prandtl number regime () for . Unfortunately, this limitation to relatively small Rayleigh numbers makes the GeoFlow experiment quite restricted regarding asymptotic scaling behaviours in spherical shells. To compensate for the lack of laboratory experiments, three-dimensional numerical models of convection in spherical shells have been developed since the 1980s (e.g. Zebib et al., 1980; Bercovici et al., 1989, 1992; Jarvis et al., 1995; Tilgner, 1996; Tilgner & Busse, 1997; King et al., 2010; Choblet, 2012). The vast majority of the numerical models of non-rotating convection in spherical shells has been developed with Earth’s mantle in mind. These models therefore assume an infinite Prandtl number and most of them further include a strong dependence of viscosity on temperature to mimic the complex rheology of the mantle. Several recent studies of isoviscous convection with infinite Prandtl number in spherical shells have nevertheless been dedicated to the analysis of the scaling properties of the Nusselt number. For instance, Deschamps et al. (2010) measured convective heat transfer in various radius ratios ranging from to and reported for , while Wolstencroft et al. (2009) computed numerical models with Earth’s mantle geometry () up to and found . These studies also checked the possible influence of internal heating and reported quite similar scalings. Most of the numerical models of convection in spherical shells have thus focused on the very specific dynamical regime of the infinite Prandtl number.
The most recent attempt to derive the scaling properties of $Nu$ and $Re$ in non-rotating spherical shells with finite Prandtl numbers is the study of Tilgner (1996). He studied convection in self-graviting spherical shells (i.e. $g \propto r$) with spanning the range and . This study was thus restricted to low Rayleigh numbers, relatively close to the onset of convection, which prevents the derivation of asymptotic scalings for $Nu$ and $Re$ in spherical shells. The objectives of the present work are twofold: (i) to study the scaling properties of $Nu$ and $Re$ in spherical shells with finite Prandtl number; (ii) to better characterise the inherent asymmetric boundary layers in thermal convection in spherical shells. We therefore conduct two systematic parameter studies of turbulent RB convection in spherical shells with $Pr=1$ by means of three-dimensional DNS. In the first set of models, we vary both the radius ratio (from to ) and the radial gravity profile (considering $g \in \{r/r_o,\ 1,\ (r_o/r)^2,\ (r_o/r)^5\}$) in a moderate parameter regime (i.e. for the majority of the cases) to study the influence of these properties on the boundary layer asymmetry. We then consider a second set of models with $g=(r_o/r)^2$ and $Ra$ up to . These DNS are used to check the applicability of the GL theory to thermal convection in spherical shells. We therefore numerically test the different basic prerequisites of the GL theory: we first analyse the nature of the boundary layers before deriving the individual scaling properties for the different contributions to the viscous and thermal dissipation rates. The paper is organised as follows. In § 2, we present the governing equations and the numerical models. We then focus on the asymmetry of the thermal boundary layers in § 3. In § 4, we analyse the nature of the boundary layers and show that the boundary layer profiles are in agreement with the Prandtl-Blasius theory (Prandtl, 1905; Blasius, 1908). In § 5, we investigate the scaling properties of the viscous and thermal dissipation rates before calculating the $Nu$ and $Re$ scalings in § 6. We conclude with a summary of our findings in § 7.

## 2 Model formulation

### 2.1 Governing hydrodynamical equations

We consider RB convection of a Boussinesq fluid contained in a spherical shell of outer radius $r_o$ and inner radius $r_i$. The boundaries are impermeable, no slip and at constant temperatures. We adopt a dimensionless formulation using the shell gap $d = r_o - r_i$ as the reference lengthscale and the viscous dissipation time $d^2/\nu$ as the reference timescale. Temperature is given in units of $\Delta T$, the imposed temperature contrast over the shell. Velocity and pressure are expressed in units of $\nu/d$ and $\rho\nu^2/d^2$, respectively. Gravity is non-dimensionalised using its reference value at the outer boundary $g_o$. The dimensionless equations for the velocity $\boldsymbol{u}$, the pressure $p$ and the temperature $T$ are given by

$$\nabla\cdot\boldsymbol{u}=0, \qquad (1)$$

$$\frac{\partial\boldsymbol{u}}{\partial t}+\boldsymbol{u}\cdot\nabla\boldsymbol{u}=-\nabla p+\frac{Ra}{Pr}\,g\,T\,\boldsymbol{e_r}+\Delta\boldsymbol{u}, \qquad (2)$$

$$\frac{\partial T}{\partial t}+\boldsymbol{u}\cdot\nabla T=\frac{1}{Pr}\,\Delta T, \qquad (3)$$

where $\boldsymbol{e_r}$ is the unit vector in the radial direction and $g$ is the gravity. Several gravity profiles have been classically considered to model convection in spherical shells. For instance, self-graviting spherical shells with a constant density correspond to $g \propto r$ (e.g. Tilgner, 1996), while RB convection models with infinite Prandtl number usually assume a constant gravity in the perspective of modelling Earth’s mantle (e.g. Bercovici et al., 1989). A centrally-condensed mass has also frequently been assumed when modelling rotating convection (e.g.
Gilman & Glatzmaier, 1981; Jones et al., 2011) and yields . Finally, the artificial central force field of the microgravity experiments takes effectively the form of (Hart et al., 1986; Feudel et al., 2011; Futterer et al., 2013). To explore the possible impact of these various radial distribution of buoyancy on RB convection in spherical shells, we consider different models with the four following gravity profiles: . Particular attention will be paid to the cases with , which is the only radial function compatible with an exact analysis of the dissipation rates (see below, § 2.3). The dimensionless set of equations (1-3) is governed by the Rayleigh number , the Prandtl number and the radius ratio of the spherical shell defined by Ra=αgoΔTd3νκ,Pr=νκ,η=riro, (4) where and are the viscous and thermal diffusivities and is the thermal expansivity. ### 2.2 Diagnostic parameters To quantify the impact of the different control parameters on the transport of heat and momentum, we analyse several diagnostic properties. We adopt the following notations regarding different averaging procedures. Overbars correspond to a time average ¯¯¯f=1τ∫t0+τt0fdt, where is the time averaging interval. Spatial averaging over the whole volume of the spherical shell are denoted by triangular brackets , while correspond to an average over a spherical surface: ⟨f⟩=1V∫Vf(r,θ,ϕ)dV;⟨f⟩s=14π∫π0∫2π0f(r,θ,ϕ)sinθdθdϕ, where is the volume of the spherical shell, is the radius, the colatitude and the longitude. The convective heat transport is characterised by the Nusselt number , the ratio of the total heat flux to the heat carried by conduction. In spherical shells with isothermal boundaries, the conductive temperature profile is the solution of ddr(r2dTcdr)=0,Tc(ri)=1,Tc(ro)=0, which yields Tc(r)=η(1−η)21r−η1−η. (5) For the sake of clarity, we adopt in the following the notation for the time-averaged and horizontally-averaged radial temperature profile: ϑ(r)=¯¯¯¯¯¯¯¯¯¯⟨T⟩s. The Nusselt number then reads Nu=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯⟨urT⟩s−1Prdϑdr−1PrdTcdr=−ηdϑdr(r=ri)=−1ηdϑdr(r=ro). (6) The typical rms flow velocity is given by the Reynolds number Re=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯√⟨u2⟩=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯√⟨u2r+u2θ+u2ϕ⟩, (7) while the radial profile for the time and horizontally-averaged horizontal velocity is defined by (8) ### 2.3 Exact dissipation relationships in spherical shells The mean buoyancy power averaged over the whole volume of a spherical shell is expressed by P=RaPr¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯⟨gurT⟩=4πVRaPr∫rorigr2¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯⟨urT⟩sdr, Using the Nusselt number definition (6) and the conductive temperature profile (5) then leads to P=4πVRaPr2(∫rorigr2dϑdrdr−Nuη(1−η)2∫rorigdr). The first term in the parentheses becomes identical to the imposed temperature drop for a gravity : ∫rorigr2dϑdrdr=r2o[ϑ(ro)−ϑ(ri)]=−r2o, and thus yields an analytical relation between and . For any other gravity model, we have to consider the actual spherically-symmetric radial temperature profile . Christensen & Aubert (2006) solve this problem by approximating by the diffusive solution (5) and obtain an approximate relation between and . This motivates our particular focus on the cases which allows us to conduct an exact analysis of the dissipation rates and therefore check the applicability of the GL theory to convection in spherical shells. 
### 2.3 Exact dissipation relationships in spherical shells

The mean buoyancy power averaged over the whole volume of a spherical shell is expressed by

$$P=\frac{Ra}{Pr}\,\overline{\langle g\,u_r\,T\rangle}=\frac{4\pi}{V}\,\frac{Ra}{Pr}\int_{r_i}^{r_o} g\,r^2\,\overline{\langle u_r T\rangle_s}\,dr.$$

Using the Nusselt number definition (6) and the conductive temperature profile (5) then leads to

$$P=\frac{4\pi}{V}\,\frac{Ra}{Pr^2}\left(\int_{r_i}^{r_o} g\,r^2\,\frac{d\vartheta}{dr}\,dr+Nu\,\frac{\eta}{(1-\eta)^2}\int_{r_i}^{r_o} g\,dr\right).$$

The first term in the parentheses becomes identical to the imposed temperature drop for a gravity $g=(r_o/r)^2$:

$$\int_{r_i}^{r_o} g\,r^2\,\frac{d\vartheta}{dr}\,dr=r_o^2\left[\vartheta(r_o)-\vartheta(r_i)\right]=-r_o^2,$$

and thus yields an analytical relation between $P$ and $Nu$. For any other gravity model, we have to consider the actual spherically-symmetric radial temperature profile $\vartheta$. Christensen & Aubert (2006) solve this problem by approximating $\vartheta$ by the diffusive solution (5) and obtain an approximate relation between $P$ and $Nu$. This motivates our particular focus on the cases $g=(r_o/r)^2$, which allow us to conduct an exact analysis of the dissipation rates and therefore check the applicability of the GL theory to convection in spherical shells. Noting that $\int_{r_i}^{r_o} g\,dr=r_o^2(1-\eta)^2/\eta$ for $g=(r_o/r)^2$, one finally obtains the exact relation for the viscous dissipation rate $\epsilon_U$:

$$\epsilon_U=\overline{\left\langle\left(\nabla\times\boldsymbol{u}\right)^2\right\rangle}=P=\frac{3}{1+\eta+\eta^2}\,\frac{Ra}{Pr^2}\,(Nu-1). \qquad (9)$$

The thermal dissipation rate can be obtained by multiplying the temperature equation (3) by $T$ and integrating it over the whole volume of the spherical shell. This yields

$$\epsilon_T=\overline{\left\langle\left(\nabla T\right)^2\right\rangle}=\frac{3\eta}{1+\eta+\eta^2}\,Nu. \qquad (10)$$

These two exact relations (9-10) can be used to validate the spatial resolutions of the numerical models with $g=(r_o/r)^2$. To do so, we introduce $\chi_{\epsilon_U}$ and $\chi_{\epsilon_T}$, the ratios of the two sides of Eqs. (9-10):

$$\chi_{\epsilon_U}=\frac{(1+\eta+\eta^2)\,Pr^2}{3\,Ra\,(Nu-1)}\,\epsilon_U,\qquad \chi_{\epsilon_T}=\frac{1+\eta+\eta^2}{3\,\eta\,Nu}\,\epsilon_T. \qquad (11)$$

### 2.4 Setting up a parameter study

#### Numerical technique

The numerical simulations have been carried out with the magnetohydrodynamics code MagIC (Wicht, 2002). MagIC has been validated via several benchmark tests for convection and dynamo action (Christensen et al., 2001; Jones et al., 2011). To solve the system of equations (1-3), the solenoidal velocity field is decomposed into a poloidal and a toroidal contribution

$$\boldsymbol{u}=\nabla\times\left(\nabla\times W\,\boldsymbol{e_r}\right)+\nabla\times Z\,\boldsymbol{e_r},$$

where $W$ and $Z$ are the poloidal and toroidal potentials. $W$, $Z$, $p$ and $T$ are then expanded in spherical harmonic functions up to a maximum degree in the angular variables $\theta$ and $\phi$, and in Chebyshev polynomials in the radial direction. The combined equations governing $W$ and $p$ are obtained by taking the radial component and the horizontal part of the divergence of (2). The equation for $Z$ is obtained by taking the radial component of the curl of (2). The equations are time-stepped by advancing the nonlinear terms using an explicit second-order Adams-Bashforth scheme, while the linear terms are time-advanced using an implicit Crank-Nicolson algorithm (a schematic sketch of this hybrid update is given below). At each time step, all the nonlinear products are calculated in the physical space $(r,\theta,\phi)$ and transformed back into the spectral space. For more detailed descriptions of the numerical method and the associated spectral transforms, the reader is referred to Gilman & Glatzmaier (1981), Tilgner & Busse (1997) and Christensen & Wicht (2007).
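The hybrid time-stepping scheme just mentioned can be illustrated on a scalar model equation $du/dt = Lu + N(u)$, with second-order Adams-Bashforth for the nonlinear term and Crank-Nicolson for the linear one. The following schematic Python sketch is ours and is not taken from MagIC, whose spectral implementation is far more involved:

```python
import numpy as np

def ab2_cn_step(u, n_prev, L, N, dt):
    """One IMEX step for du/dt = L*u + N(u):
    Adams-Bashforth (2nd order) for the nonlinear term N,
    Crank-Nicolson for the linear term L (a scalar here)."""
    n_curr = N(u)
    rhs = u + dt * (0.5 * L * u + 1.5 * n_curr - 0.5 * n_prev)
    u_new = rhs / (1.0 - 0.5 * dt * L)
    return u_new, n_curr

# Toy problem: du/dt = -u + sin(u), u(0) = 1
L, N, dt = -1.0, np.sin, 1e-3
u, n_prev = 1.0, N(1.0)        # bootstrap AB2 with N evaluated at t = 0
for _ in range(1000):
    u, n_prev = ab2_cn_step(u, n_prev, L, N, dt)
print(u)                       # solution at t = 1
```

The key design point, as in the full spectral code, is that only the linear operator has to be inverted implicitly, while the expensive nonlinear products stay explicit.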
#### Parameter choices

One of the main focuses of this study is to investigate the global scaling properties of RB convection in spherical shell geometries. This is achieved via measurements of the Nusselt and Reynolds numbers. In particular, we aim to test the applicability of the GL theory to spherical shells. As demonstrated before, only the particular choice of a gravity profile of the form $g=(r_o/r)^2$ allows the exactness of the relation (9). Our main set of simulations is thus built assuming $g=(r_o/r)^2$. The radius ratio is kept fixed and the Prandtl number is set to one to allow future comparisons with the rotating convection models by Gastine & Wicht (2012) and Gastine et al. (2013), who adopted the same configuration. We consider 35 numerical cases spanning the range . Table 1 summarises the main diagnostic quantities for this dataset of numerical simulations and shows that our models basically lie within the ranges and . Another important issue in convection in spherical shells concerns the determination of the average bulk temperature and the possible boundary layer asymmetry between the inner and the outer boundaries (e.g. Jarvis, 1993; Tilgner, 1996). To better understand the influence of curvature and the radial distribution of buoyancy, we thus compute a second set of numerical models. This additional dataset consists of 113 simulations with various radius ratios and gravity profiles, spanning the range with . To limit the numerical cost of this second dataset, these cases have been run at moderate Rayleigh number and typically span the range for the majority of the cases. Table 2, given in the Appendix, summarises the main diagnostic quantities for this second dataset of numerical simulations.

#### Resolution checks

Attention must be paid to the numerical resolutions of the DNS of RB convection (e.g. Shishkina et al., 2010). In particular, underresolving the fine structure of the turbulent flow leads to an overestimate of the Nusselt number, which then falsifies the possible scaling analysis (Amati et al., 2005). One of the most reliable ways to validate the truncations employed in our numerical models consists of comparing the obtained viscous and thermal dissipation rates with the average Nusselt number (Stevens et al., 2010; Lakkaraju et al., 2012; King et al., 2012). The ratios $\chi_{\epsilon_U}$ and $\chi_{\epsilon_T}$, defined in (11), are found to be very close to unity for all the cases of Table 1, which supports the adequacy of the employed numerical resolutions. To further highlight the possible impact of inadequate spatial resolutions, two underresolved numerical models for the two highest Rayleigh numbers have also been included in Table 1 (lines in italics). Because of the insufficient number of grid points in the boundary layers, the viscous dissipation rates are significantly higher than expected in the statistically stationary state. This leads to overestimated Nusselt numbers by similar percent differences (). Table 1 shows that the typical resolutions span the range from () to (). The two highest Rayleigh numbers have been computed assuming a two-fold azimuthal symmetry to ease the numerical computations. A comparison of test runs with or without the two-fold azimuthal symmetry at lower Rayleigh numbers () showed no significant statistical differences. This enforced symmetry is thus not considered to be influential. The total computational time for the two datasets of numerical models represents roughly 5 million Intel Ivy Bridge CPU hours.

## 3 Asymmetric boundary layers in spherical shells

### 3.1 Definitions

Several different approaches are traditionally considered to define the thermal boundary layer thickness $\lambda_T$. They either rely on the horizontally-averaged mean radial temperature profile $\vartheta(r)$ or on the temperature fluctuation $\sigma$, defined as

$$\sigma(r)=\sqrt{\overline{\langle T^2\rangle_s}-\vartheta^2}. \qquad (12)$$

Among the possible estimates based on $\vartheta$, the slope method (e.g. Verzicco & Camussi, 1999; Breuer et al., 2004; Liu & Ecke, 2011) defines $\lambda_T$ as the depth where the linear fit to $\vartheta$ near the boundaries intersects the linear fit to the temperature profile at mid-depth. Alternatively, $\sigma$ exhibits sharp local maxima close to the walls. The radial distance separating those peaks from the corresponding nearest boundary can be used to define the thermal boundary layer thicknesses (e.g. Tilgner, 1996; King et al., 2013). Figure 2(a) shows that both definitions of $\lambda_T$ actually yield nearly indistinguishable boundary layer thicknesses. We therefore adopt the slope method to define the thermal boundary layers. There are also several ways to define the viscous boundary layers. Figure 2(b) shows the vertical profile of the root-mean-square horizontal velocity $Re_h$. This profile exhibits strong increases close to the boundaries that are accompanied by well-defined peaks.
Following Tilgner (1996) and Kerr & Herring (2000), the first way to define the kinematic boundary layer is thus to measure the distance between the walls and these local maxima. This commonly-used definition gives $\lambda_U^i$ ($\lambda_U^o$) for the inner (outer) spherical boundary. Another possible method to estimate the viscous boundary layer follows a similar strategy as the slope method that we adopted for the thermal boundary layers (Breuer et al., 2004): the thickness is defined as the distance from the inner (outer) wall where the linear fit to $Re_h$ near the inner (outer) boundary intersects the horizontal line passing through the maximum horizontal velocity. Figure 2(b) reveals that these two definitions lead to very distinct viscous boundary layer thicknesses. In particular, the definition based on the position of the local maxima of $Re_h$ yields much thicker boundary layers than the tangent intersection method. The discrepancies of these two definitions are further discussed in § 4.

### 3.2 Asymmetric thermal boundary layers and mean bulk temperature

Figure 2 also reveals a pronounced asymmetry in the mean temperature profiles with a much larger temperature drop at the inner boundary than at the outer boundary. As a consequence, the mean temperature of the spherical shell is much below $1/2$. Determining how the mean temperature depends on the radius ratio has been an ongoing open question in mantle convection studies with infinite Prandtl number (e.g. Bercovici et al., 1989; Jarvis, 1993; Vangelov & Jarvis, 1994; Jarvis et al., 1995; Sotin & Labrosse, 1999; Shahnas et al., 2008; Deschamps et al., 2010; O’Farrell et al., 2013). To analyse this issue in numerical models with $Pr=1$, we have performed a systematic parameter study varying both the radius ratio of the spherical shell and the gravity profile (see Table 2). Figure 3 shows some selected radial profiles of the mean temperature for various radius ratios (panel a) and gravity profiles (panel b) for cases with similar $Nu$. For small values of $\eta$, the large difference between the inner and the outer surfaces leads to a strong asymmetry in the temperature distribution: nearly 90% of the total temperature drop occurs at the inner boundary when . In thinner spherical shells, the mean temperature gradually approaches a more symmetric distribution to finally reach $T_m=1/2$ when $\eta\rightarrow 1$ (no curvature). Figure 3(b) also reveals that a change in the gravity profile has a direct impact on the mean temperature profile. This shows that both the shell geometry and the radial distribution of buoyancy affect the temperature of the fluid bulk in RB convection in spherical shells. To analytically access the asymmetries in thickness and temperature drop observed in figure 3, we first assume that the heat is purely transported by conduction in the thin thermal boundary layers. The heat flux conservation through spherical surfaces (6) then yields

$$\frac{\Delta T_o}{\lambda_T^o}=\eta^2\,\frac{\Delta T_i}{\lambda_T^i}, \qquad (13)$$

where the thermal boundary layers are assumed to correspond to a linear conduction profile with a temperature drop $\Delta T_i$ ($\Delta T_o$) over a thickness $\lambda_T^i$ ($\lambda_T^o$). As shown in Figs. 2-3, the fluid bulk is isothermal and forms the majority of the fluid by volume. We can thus further assume that the temperature drops occur only in the thin boundary layers, which leads to

$$\Delta T_o+\Delta T_i=1. \qquad (14)$$

Equations (13) and (14) are nevertheless not sufficient to determine the three unknowns $\Delta T_i$, $\Delta T_o$ and $\lambda_T^o/\lambda_T^i$, and an additional physical assumption is required.
A hypothesis frequently used in mantle convection models with infinite Prandtl number in spherical geometry (Jarvis, 1993; Vangelov & Jarvis, 1994) is to further assume that both thermal boundary layers are marginally stable, such that the local boundary layer Rayleigh numbers $Ra_\lambda^i$ and $Ra_\lambda^o$ are equal:

$$Ra_\lambda^i=Ra_\lambda^o\;\rightarrow\;\frac{\alpha\,g_i\,\Delta T_i\,(\lambda_T^i)^3}{\nu\kappa}=\frac{\alpha\,g_o\,\Delta T_o\,(\lambda_T^o)^3}{\nu\kappa}. \qquad (15)$$

This means that both thermal boundary layers adjust their thickness and temperature drop to yield the same boundary layer Rayleigh number (e.g., Malkus, 1954). The temperature drops at both boundaries and the ratio of the thermal boundary layer thicknesses can then be derived using Eqs. (13-14):

$$\Delta T_i=\frac{1}{1+\eta^{3/2}\chi_g^{1/4}},\qquad \Delta T_o\simeq T_m=\frac{\eta^{3/2}\chi_g^{1/4}}{1+\eta^{3/2}\chi_g^{1/4}},\qquad \frac{\lambda_T^o}{\lambda_T^i}=\frac{\chi_g^{1/4}}{\eta^{1/2}}, \qquad (16)$$

where

$$\chi_g=\frac{g(r_i)}{g(r_o)} \qquad (17)$$

is the ratio of the gravitational acceleration between the inner and the outer boundaries. Figure 4(a) reveals that the marginal stability hypothesis is not fulfilled when different radius ratios and gravity profiles are considered. This is particularly obvious for small radius ratios, where the inner boundary layer Rayleigh number is more than 10 times larger than the outer one. This discrepancy tends to vanish when $\eta\rightarrow 1$, when curvature and gravity variations become unimportant. As a consequence, there is a significant mismatch between the predicted mean bulk temperature from (16) and the actual values (figure 4b). Deschamps et al. (2010) also reported a similar deviation from (16) in their spherical shell models with infinite Prandtl number. They suggest an alternative assumption that might help to improve the agreement with the data. This however cannot account for the additional dependence on the gravity profile visible in figure 4. We finally note that the boundary layer Rayleigh numbers remain moderate for the database of numerical simulations explored here, which suggests that the thermal boundary layers are stable in all our simulations. Alternatively, Wu & Libchaber (1991) and Zhang et al. (1997) proposed that the thermal boundary layers adapt their thicknesses such that the mean hot and cold temperature fluctuations at mid-depth are equal. Their experiments with helium indeed revealed that the statistical distribution of the temperature at mid-depth was symmetrical. They further assumed that the thermal fluctuations in the centre can be identified with the boundary layer temperature scales $\theta_i$ and $\theta_o$, which characterise the temperature scale of the thermal boundary layers in a different way than the relative temperature drops $\Delta T_i$ and $\Delta T_o$. This second hypothesis yields

$$\theta_i=\theta_o\;\rightarrow\;\frac{\nu\kappa}{\alpha\,g_i\,(\lambda_T^i)^3}=\frac{\nu\kappa}{\alpha\,g_o\,(\lambda_T^o)^3}, \qquad (18)$$

and the corresponding temperature drops and boundary layer thicknesses ratio

$$\Delta T_i=\frac{1}{1+\eta^2\chi_g^{1/3}},\qquad \Delta T_o=T_m=\frac{\eta^2\chi_g^{1/3}}{1+\eta^2\chi_g^{1/3}},\qquad \frac{\lambda_T^o}{\lambda_T^i}=\chi_g^{1/3}. \qquad (19)$$

Figure 5(a) shows the ratio of the boundary layer temperature scales for different radius ratios and gravity profiles, while figure 5(b) shows a comparison between the predicted mean bulk temperature and the actual values. Besides the cases with , which are in relatively good agreement with the predicted scalings, the identity of the boundary layer temperature scales is in general not fulfilled for the other gravity profiles. The actual mean bulk temperature is thus poorly described by (19). We note that previous findings by Ahlers et al. (2006) already reported that the theory by Wu & Libchaber does also not hold when the transport properties depend on temperature (i.e. non-Oberbeck-Boussinesq convection).

### 3.3 Conservation of the average plume density in spherical shells

As demonstrated in the previous section, none of the hypotheses classically employed accurately account for the temperature drops and the boundary layer asymmetry observed in spherical shells.
We must therefore find a dynamical quantity that could possibly be identified between the two boundary layers. Figure 6 shows visualisations of the thermal boundary layers for three selected numerical models with different radius ratios and gravity profiles. The isocontours displayed in panels (a-c) reveal the intricate plume structure. Long and thin sheet-like structures form the main network of plumes. During their migration along the spherical surfaces, these sheet-like plumes can collide and convolute with each other to give rise to mushroom-type plumes (see Zhou & Xia, 2010b; Chillà & Schumacher, 2012). During this morphological evolution, mushroom-type plumes acquire a strong radial vorticity component. These mushroom-type plumes are particularly visible at the connection points of the sheet plume network at the inner thermal boundary layer (red isosurface in figure 6a-c). Figure 6(d-f) shows the corresponding equatorial and radial cuts of the temperature fluctuation $T'$. These panels further highlight the plume asymmetry between the inner and the outer thermal boundary layers. For instance, the case with and (top panels) features an outer boundary layer approximately 4.5 times thicker than the inner one. Accordingly, the mushroom-like plumes that depart from the outer boundary layer are significantly thicker than the ones emitted from the inner boundary. This discrepancy tends to vanish in the thin shell case (, bottom panels) in which curvature and gravity variations play a less significant role ( in that case). Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) performed statistical analyses of the geometrical properties of thermal plumes in experimental RB convection. By tracking a large number of plumes, their analysis revealed that both the plume separation and the width of the sheet-like plumes follow a log-normal probability density function (PDF). To further assess how the average plume properties of the inner and outer thermal boundary layers compare with each other in spherical geometry, we adopt a simpler strategy by only focussing on the statistics of the plume density. The plume density per surface unit at a given radius is expressed by

$$\rho_p\sim\frac{N}{4\pi r^2}, \qquad (20)$$

where $N$ is the number of plumes, approximated here by the ratio of the spherical surface area to the mean inter-plume area $\bar{S}$:

$$N\sim\frac{4\pi r^2}{\bar{S}}. \qquad (21)$$

This inter-plume area can be further related to the average plume separation $\bar{\ell}$ via $\bar{S}\sim\bar{\ell}^{\,2}$. An accurate evaluation of the inter-plume area for each thermal boundary layer however requires separating the plumes from the background fluid. Most of the criteria employed to determine the location of the plume boundaries are based on thresholds of certain physical quantities (see Shishkina & Wagner, 2008, for a review of the different plume extraction techniques). This encompasses threshold values on the temperature fluctuations (Zhou & Xia, 2002), on the vertical velocity (Ching et al., 2004) or on the thermal dissipation rate (Shishkina & Wagner, 2005). The choice of the threshold value however remains an open problem. Alternatively, Vipin & Puthenveettil (2013) show that the sign of the horizontal divergence of the velocity might provide a simple and threshold-free criterion to separate the plumes from the background fluid:

$$\nabla_H\cdot\boldsymbol{u}=\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,u_\theta\right)+\frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}=-\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 u_r\right).$$

Fluid regions with $\nabla_H\cdot\boldsymbol{u}<0$ indeed correspond to local regions of positive vertical acceleration, expected inside the plumes, while the fluid regions with $\nabla_H\cdot\boldsymbol{u}>0$ characterise the inter-plume area.
To analyse the statistics of the inter-plume area, we thus consider here several criteria based either on a threshold value of the temperature fluctuations or on the sign of the horizontal divergence. This means that a given inter-plume area at the inner (outer) thermal boundary layer is either defined as an enclosed region surrounded by hot (cold) sheet-like plumes carrying a temperature perturbation exceeding a threshold $t$, or by an enclosed region with $\nabla_H\cdot\boldsymbol{u}\ge 0$. To further estimate the possible impact of the chosen threshold value, we vary $t$ between $\sigma(r_\lambda)/4$ and $\sigma(r_\lambda)$. This yields

$$S(r)\equiv r^2\oint_{\mathcal{T}}\sin\theta\,d\theta\,d\phi, \qquad (22)$$

where the physical criterion $\mathcal{T}_i$ ($\mathcal{T}_o$) to extract the plume boundaries at the inner (outer) boundary layer is given by

$$\mathcal{T}_i=\left\{\begin{array}{l}T'(r_\lambda^i,\theta,\phi)\le t,\quad t\in\left[\sigma(r_\lambda^i),\ \sigma(r_\lambda^i)/2,\ \sigma(r_\lambda^i)/4\right],\\ \nabla_H\cdot\boldsymbol{u}\ge 0,\end{array}\right. \qquad (23)$$

$$\mathcal{T}_o=\left\{\begin{array}{l}T'(r_\lambda^o,\theta,\phi)\ge -t,\quad t\in\left[\sigma(r_\lambda^o),\ \sigma(r_\lambda^o)/2,\ \sigma(r_\lambda^o)/4\right],\\ \nabla_H\cdot\boldsymbol{u}\ge 0,\end{array}\right.$$

where $r_\lambda^i$ ($r_\lambda^o$) denotes the radial position of the edge of the inner (outer) thermal boundary layer. Figure 7 shows an example of such a characterisation procedure for the inner thermal boundary layer of a numerical model with , , . Panel (b) illustrates a plume extraction process when using the temperature criterion to determine the location of the plumes: the black area corresponds to the inter-plume spacing while the white area corresponds to the complementary plume network location. The fainter emerging sheet-like plumes are filtered out and only the remaining “skeleton” of the plume network is selected by this extraction process. The choice of the threshold is however arbitrary and can influence the evaluation of the number of plumes. The insets displayed in panels (c-e) illustrate the sensitivity of the plume extraction process on the criterion employed to detect the plumes. In particular, using the threshold based on the largest temperature fluctuations can lead to the fragmentation of the detected plume lanes into several isolated smaller regions. As a consequence, several neighbouring inter-plume areas can possibly be artificially connected when using this criterion. In contrast, using the sign of the horizontal divergence to estimate the plume locations yields much broader sheet-like plumes. As visible on panel (e), the plume boundaries frequently correspond to local maxima of the thermal dissipation rate (Shishkina & Wagner, 2008). For each criterion given in (23), we then calculate the area of each bounded black surface visible in figure 7(b) to construct the statistical distribution of the inter-plume area for both thermal boundary layers. Figure 8 compares the resulting PDFs obtained by combining several snapshots for a numerical model with , and . Besides the $\nabla_H\cdot\boldsymbol{u}\ge 0$ criterion, which yields PDFs that are slightly shifted towards smaller inter-plume spacing areas, the statistical distributions are found to be relatively insensitive to the detection criterion (23). We therefore restrict the following comparison to the threshold criterion $t=\sigma(r_\lambda)$ only. Figure 9 shows the PDFs for the three numerical models of figure 6. For the two cases with and (panels b-c), the statistical distributions for both thermal boundary layers nearly overlap. This means that the inter-plume area is similar at both spherical shell surfaces. In contrast, for the case with (panel a), the two PDFs are offset relative to each other. However, the peaks of the distributions remain relatively close, meaning that once again the inner and the outer thermal boundary layers share a similar average inter-plume area.
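The extraction procedure itself is straightforward to prototype. The sketch below (Python with NumPy/SciPy; the function name and the uniform grid conventions are ours, and the periodic wrap in longitude is ignored for brevity) labels the connected inter-plume patches of a temperature-fluctuation map on a spherical surface and returns their areas, from which PDFs such as those of figures 8-9 can be built:

```python
import numpy as np
from scipy import ndimage

def interplume_areas(t_prime, threshold, radius):
    """Areas of connected inter-plume patches on a (theta, phi) surface.

    t_prime   : 2-D array of T'(theta, phi) on a uniform angular grid
    threshold : plumes are taken as T' > threshold (inner-boundary case),
                so the inter-plume fluid is the region T' <= threshold
    radius    : radius of the spherical surface
    """
    ntheta, nphi = t_prime.shape
    theta = (np.arange(ntheta) + 0.5) * np.pi / ntheta
    # surface element r^2 sin(theta) dtheta dphi
    dA = radius**2 * np.sin(theta)[:, None] * (np.pi / ntheta) * (2 * np.pi / nphi)
    dA = np.broadcast_to(dA, t_prime.shape)
    labels, nlab = ndimage.label(t_prime <= threshold)   # connected components
    return ndimage.sum(dA, labels, index=np.arange(1, nlab + 1))

# Synthetic example: a random field standing in for a DNS snapshot
rng = np.random.default_rng(0)
field = rng.standard_normal((128, 256))
areas = interplume_areas(field, field.std(), 1.0)
print(len(areas), areas.mean())
```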
Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) demonstrated that the thermal plume statistics in turbulent RB convection follow a log-normal distribution (see also Shishkina & Wagner, 2008; Puthenveettil et al., 2011). The large number of plumes in the cases with and (figure 6b-c) would allow a characterisation of the nature of the statistical distributions. However, this would be much more difficult in the case (figure 6a) in which the plume density is significantly weaker. As a consequence, no further attempt has been made to characterise the exact nature of the PDFs visible in figure 9, although the universality of the log-normal statistics reported by Puthenveettil & Arakeri (2005) and Zhou & Xia (2010b) likely indicates that the same statistical distribution should hold here too. The inter-plume area statistics therefore reveals that the inner and the outer thermal boundary layers exhibit a similar average plume density, independently of the spherical shell geometry and the gravity profile. Assuming would allow us to close the system of equations (13-14) and thus finally estimate , and . This however requires us to determine an analytical expression of the average inter-plume area or equivalently of the mean plume separation that depends on the boundary layer thickness and the temperature drop. Using the boundary layer equations for natural convection (Rotem & Claassen, 1969), Puthenveettil et al. (2011) demonstrated that the thermal boundary layer thickness follows λi,oT(x)∼x(Rai,ox)1/5, (24) where is the distance along the horizontal direction and is a Rayleigh number based on the lengthscale and on the boundary layer temperature jumps . As shown on figure 10, using (Puthenveettil & Arakeri, 2005; Puthenveettil et al., 2011) then allows to establish the following relation for the average plume spacing λT¯ℓ∼1Ra1/5ℓ. (25) which yields ¯ℓi∼√αgiΔTiλiT5νκ,¯ℓo∼√αgoΔToλoT5νκ, (26) for both thermal boundary layers. We note that an equivalent expression for the average plume spacing can be derived from a simple mechanical description of the equilibrium between production and coalescence of plumes in each boundary layer (see Parmentier & Sotin, 2000; King et al., 2013). Equation (26) is however expected to be only valid at the scaling level. The vertical lines in figure 9 therefore correspond to the estimated average inter-plume area for both thermal boundary layers using (26) and . The predicted average inter-plume area is in good agreement with the peaks of the statistical distributions for the three cases discussed here. The expression (26) therefore provides a reasonable estimate of the average plume separation (Puthenveettil & Arakeri, 2005; Puthenveettil et al., 2011; Gunasegarane & Puthenveettil, 2014). The comparable observed plume density at both thermal boundary layers thus yields ρip=ρop→αgiΔTiλiT5νκ=αgoΔToλoT5νκ. (27) Using Eqs. (13-14) then allows us to finally estimate the temperature jumps and the ratio of the thermal boundary layer thicknesses in our dimensionless units: ΔTi=11+η5/3χ1/6g,ΔTo=Tm=η5/3χ1/6g1+η5/3χ1/6g,λoTλiT=χ1/6gη1/3. (28) Figure 11 shows the ratios , and the temperature jumps and . In contrast to the previous criteria, either coming from the marginal stability of the boundary layer (16, figure 4) or from the identity of the temperature fluctuations at mid-shell (28, figure 5), the ratio of the average plume separation now falls much closer to the unity line. Some deviations are nevertheless still visible for spherical shells with and (orange circles). 
The comparable average plume density between both boundary layers allows us to accurately predict the asymmetry of the thermal boundary layers and the corresponding temperature drops for the vast majority of the numerical cases explored here (solid lines in panels b-d). As we consider a fluid with $Pr=1$, the viscous boundary layers should show a comparable degree of asymmetry to the thermal boundary layers. Equation (28) thus implies

$$\frac{\lambda_U^o}{\lambda_U^i}=\frac{\lambda_T^o}{\lambda_T^i}=\frac{\chi_g^{1/6}}{\eta^{1/3}}. \qquad (29)$$

Figure 12 shows the ratio of the viscous boundary layer thicknesses for the different setups explored in this study. The observed asymmetry between the two spherical shell surfaces is in good agreement with (29) (solid black lines).

### 3.4 Thermal boundary layer scalings

Using (28) and the definition of the Nusselt number (6), we can derive the following scaling relations for the thermal boundary layer thicknesses:

$$\lambda_T^i=\frac{\eta}{1+\eta^{5/3}\chi_g^{1/6}}\,\frac{1}{Nu},\qquad \lambda_T^o=\frac{\eta^{2/3}\chi_g^{1/6}}{1+\eta^{5/3}\chi_g^{1/6}}\,\frac{1}{Nu}.$$
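The closed-form predictions of the three competing closures are easy to tabulate. A minimal Python sketch (the helper name and keyword labels are ours) implementing Eqs. (16), (19) and (28):

```python
def boundary_layer_asymmetry(eta, chi_g, closure="plume_density"):
    """Predicted boundary layer asymmetry in a spherical shell.

    eta   : radius ratio r_i/r_o
    chi_g : gravity ratio g(r_i)/g(r_o)
    Returns (Delta_T_i, T_m, lambda_o_over_lambda_i) following
    Eq. (16) for closure="marginal_stability" (equal Ra_lambda),
    Eq. (19) for closure="temperature_scales" (equal theta_i, theta_o),
    Eq. (28) for closure="plume_density"      (equal plume density).
    """
    if closure == "marginal_stability":
        t_m = eta**1.5 * chi_g**0.25
        ratio = chi_g**0.25 / eta**0.5
    elif closure == "temperature_scales":
        t_m = eta**2 * chi_g**(1.0 / 3.0)
        ratio = chi_g**(1.0 / 3.0)
    else:  # equal plume density, Eq. (28)
        t_m = eta**(5.0 / 3.0) * chi_g**(1.0 / 6.0)
        ratio = chi_g**(1.0 / 6.0) / eta**(1.0 / 3.0)
    dT_i = 1.0 / (1.0 + t_m)
    return dT_i, t_m / (1.0 + t_m), ratio

# Example: eta = 0.3 with g = (r_o/r)^2, i.e. chi_g = 1/eta^2
eta = 0.3
print(boundary_layer_asymmetry(eta, 1.0 / eta**2))
```

Comparing the three outputs for the same shell makes the differences between the closures immediate: the plume-density closure predicts the weakest boundary layer asymmetry of the three, which is what the simulations reported above favour.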
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637791514396667, "perplexity": 967.6427979233613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668004.65/warc/CC-MAIN-20191114053752-20191114081752-00214.warc.gz"}
http://en.wikibooks.org/wiki/GLPK/Steps_of_GMPL_File_Processing
# GLPK/Steps of GMPL File Processing

This page describes the steps that GLPK takes when processing a model written in GMPL (also known as MathProg).

## Processing steps

GLPK solves a model in a series of steps.

### Model section translation

The model section of the GMPL file is parsed and internal structures describing the different objects, such as variables, constraints, and expressions, are created. This phase is executed by the function glp_mpl_read_model.

### Data section translation

The data section of the model file is used to initialize parameters and sets. If the data section is contained within the model file, this phase is executed by the function glp_mpl_read_model. If data files are (optionally) provided, this phase is executed by the function glp_mpl_read_data.

### Model generation

In this phase, the statements and expressions of the model, up to the GMPL solve statement, are evaluated. This phase is executed by the function glp_mpl_generate. Model generation can be computationally expensive and users should expect the process of generating large or complex models to take time.

The constraints themselves are normalized according to the following rules:

| Original form | Standard form |
| --- | --- |
| $a x + b \ge c y + d$ | $a x - c y \ge d - b$ |
| $a x + b \le c y + d$ | $a x - c y \le d - b$ |
| $a x + b = c y + d$ | $a x - c y = d - b$ |
| $c \le a x + b \le d$ | $c - b \le a x \le d - b$ |

Dual values are calculated using the standard form of the constraint. This implies that, in the following example, the dual value of c2 will take the opposite sign relative to the dual value of c1 for the otherwise equivalent constraint formulations:

    s.t. c1 : 3 * z = 1;
    s.t. c2 : 1 = 3 * z;

Care is therefore required when interpreting the dual values for nonstandard constraints. Conversely, good practice suggests using the standard forms where practical.

### Model building

The problem instance for the solver is created. This phase is executed by the function glp_mpl_build_prob. This call will fail if not preceded by a call to glp_mpl_generate.

### Solution

A solution is attempted by calling the appropriate solver: simplex, interior-point, or MIP.

### Postsolving

The results of the solver call are transferred back to the GMPL variables and constraints. All statements after the solve statement in the model file are executed. This phase is executed by function glp_mpl_postsolve.

## Further study

More details can be obtained by examining:

• function glp_main in implementation file src/glpapi19.c (as of GLPK 4.45)
• the example in chapter 3.2 Routines for processing MathProg models in doc/glpk.pdf from the source distribution.

## Prescribed starts

It is not possible to specify a feasible (but possibly suboptimal) starting solution when using GLPSOL — but this feature is supported when programming with the GLPK API using the callback hook of the branch-and-cut algorithm.
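The phases above can be driven directly from C through the API functions they name. A minimal sketch (the file names model.mod and model.dat and the bare-bones error handling are our assumptions, not part of this page):

    #include <stdio.h>
    #include <glpk.h>

    int main(void)
    {
        glp_prob *prob = glp_create_prob();
        glp_tran *tran = glp_mpl_alloc_wksp();  /* MathProg translator workspace */
        int ret;

        /* model section translation (skip an embedded data section, if any) */
        ret = glp_mpl_read_model(tran, "model.mod", 1);
        if (ret != 0) goto cleanup;

        /* data section translation from a separate data file */
        ret = glp_mpl_read_data(tran, "model.dat");
        if (ret != 0) goto cleanup;

        /* model generation: evaluate statements up to the solve statement */
        ret = glp_mpl_generate(tran, NULL);
        if (ret != 0) goto cleanup;

        /* model building: create the problem instance for the solver */
        glp_mpl_build_prob(tran, prob);

        /* solution: here, the simplex solver */
        glp_simplex(prob, NULL);

        /* postsolving: copy results back and run statements after 'solve' */
        ret = glp_mpl_postsolve(tran, prob, GLP_SOL);

    cleanup:
        glp_mpl_free_wksp(tran);
        glp_delete_prob(prob);
        return ret;
    }

For a MIP model one would call glp_intopt instead of glp_simplex and pass GLP_MIP rather than GLP_SOL to glp_mpl_postsolve.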
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5638391375541687, "perplexity": 1989.1553620455777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663218.28/warc/CC-MAIN-20140930004103-00133-ip-10-234-18-248.ec2.internal.warc.gz"}
https://berkeley-cs61as.github.io/textbook/how-recursion-works.html
# How Recursion Works

## Breaking Down Recursion

Let's see how recursion can magically find the factorial of any number. We've replicated the code below:

    (define (factorial n)
      (if (= n 0)
          1
          (* n (factorial (- n 1)))))

factorial returns 1 when n is 0; otherwise, it returns the product of n and the factorial of n - 1.

Every recursive procedure uses conditionals, and will need two cases:

• Base case: This case ends the recursion. Any input to a recursive procedure will eventually reach the base case.
• Recursive case: This case reduces the size of the problem. The recursive case will always try to make the problem smaller until it reaches the base case.

There can be more than one base case or recursive case in a recursive procedure, but there must be at least one of each in order for any procedure to be correct and recursive. There is one base case and one recursive case in our factorial procedure. Can you identify them?

The case in which n is 0 is the base case of factorial. Consider this alternate definition of factorial, which has no base case:

    (define (factorial n)
      (* n (factorial (- n 1))))

What is wrong with this alternate definition?

The second case, in which we call factorial within itself, is the recursive case. Notice that the recursive call solves a smaller problem (i.e., (factorial (- n 1))) than the one we were originally given. Consider this alternate definition of factorial:

    (define (factorial n)
      (if (= n 0)
          1
          (factorial n)))

What's wrong with this alternate definition?

Which of the following statements must hold for every recursive procedure you write? Choose all that apply.

## Leap of Faith

At this point, you may still be wondering how a function can be defined in terms of itself. If you use factorial in the middle of defining factorial, shouldn't you get an error saying that factorial isn't defined yet?

In order to make it work, you have to believe that it works. This is, in a sense, a leap of faith. The leap of faith is actually a technique for writing recursive procedures. We must imagine that the procedure you are writing already works for any problem smaller than the one you are currently tackling. Thus, while you are thinking about how to compute (factorial 5), imagine that (factorial 4) has already been solved. This will keep your own thoughts from getting stuck in an infinite loop.

Back in Lesson 0-2, we stated an important property of defining procedures: the procedure body is not evaluated when the procedure is defined. This is the technical reason why recursion can work. Thus, define is a special form that does not evaluate its arguments and keeps the procedure body from being evaluated. The body is only evaluated when you call the procedure outside of the definition.

Which of these expressions cause an error in Racket? Select all that apply. Enter each expression into the Racket interpreter and see what happens.

## factorial Revisited

Let's take a look at the definition of factorial again.

    (define (factorial n)
      (if (= n 0)
          1
          (* n (factorial (- n 1)))))

If we would like to evaluate (factorial 6), then we reach the else case of the if statement and reduce the problem to (* 6 (factorial 5)). To simplify this further, we'll need to evaluate (factorial 5). Thus, we get (* 5 (factorial 4)). If we substitute this into the original expression, we get (* 6 (* 5 (factorial 4))).
A few more recursive calls later, we'll get something like this:

    (factorial 6)
    (* 6 (factorial 5))
    (* 6 (* 5 (factorial 4)))
    (* 6 (* 5 (* 4 (factorial 3))))
    (* 6 (* 5 (* 4 (* 3 (factorial 2)))))
    (* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))
    (* 6 (* 5 (* 4 (* 3 (* 2 (* 1 (factorial 0)))))))

What should we do with (factorial 0)? This is the base case, and we should just return 1. Thus, we get this expression:

    (* 6 (* 5 (* 4 (* 3 (* 2 (* 1 1))))))

This is simply a series of nested multiplication expressions, which we can simplify easily, from inside out:

    (* 6 (* 5 (* 4 (* 3 (* 2 1)))))
    (* 6 (* 5 (* 4 (* 3 2))))
    (* 6 (* 5 (* 4 6)))
    (* 6 (* 5 24))
    (* 6 120)
    720

In Racket, there is a very useful procedure called trace, which takes a procedure as an argument and prints each call to that procedure, together with its return value, whenever the procedure is invoked. In your Racket interpreter, type (trace factorial) after defining the factorial procedure, then call (factorial 6). What do you see? If you no longer want to trace the procedure, simply type (untrace factorial).

## Example: Fibonacci Numbers

Consider computing the sequence of Fibonacci numbers, in which each number is the sum of the preceding two:

\begin{align} 0, 1, 1, 2, 3, 5, 8, 13, 21 \end{align}

In general, the Fibonacci numbers can be defined by the following rule:

\begin{align} Fib(n) = \begin{cases} 0, & \text{if n = 0} \\ 1, & \text{if n = 1} \\ Fib(n - 1) + Fib(n - 2), & \text{otherwise} \end{cases} \end{align}

We can immediately translate this definition into a recursive procedure for computing Fibonacci numbers:

    (define (fib n)
      (cond ((= n 0) 0)
            ((= n 1) 1)
            (else (+ (fib (- n 1)) (fib (- n 2))))))

Consider what happens when we call (fib 2). The procedure makes two recursive calls (fib 1) and (fib 0), which return 1 and 0 respectively. These numbers are added together, and the procedure returns 1.

You may be wondering if it's really necessary to have two separate base cases. Consider what would happen if we left out the base case for when n is 1. (fib 1) would call (+ (fib 0) (fib -1)). (fib 0) would return 0, but (fib -1) would never reach a base case, and the procedure would loop indefinitely.

## Example: Pig Latin

You may be familiar with Pig Latin, which is a language game where words in English are altered according to a simple set of rules: take the first consonant (or consonant cluster) of an English word, move it to the end of the word, and append "ay" to the word. For example, "pig" yields "igpay", "trash" yields "ashtray", and "object" yields "objectay". We can write Pig Latin in Racket using recursion and helper procedures:

    (define (pigl wd)
      (if (pl-done? wd)
          (word wd 'ay)
          (pigl (word (bf wd) (first wd)))))

    (define (pl-done? wd)
      (vowel? (first wd)))

    (define (vowel? letter)
      (member? letter '(a e i o u)))

As a reminder, member? is a Racket primitive procedure that takes two arguments, a letter and a word, and returns true if the letter is in the word. Pig Latin is done when a vowel is found, so the base case is when pl-done? returns true, and it just concatenates "ay" at the end of the word. Otherwise, in the recursive case, it calls itself with the concatenation of the butfirst of the word and the first of the word. Think about what happens if the word contains no vowels.

Use your Racket interpreter to try out this implementation of pigl. Don't forget to take advantage of the trace procedure!
## Example: sum-sent

Suppose we have a sentence of numbers, such as the one below:

    (define sent '(1 2 3 4 5))

We want to define a procedure called sum-sent that can find the sum of all the numbers in sent, but we also want sum-sent to be able to find the sum of any sentence of numbers. Since the output depends on the size of the input sentence, we will have to use recursion!

Let's take the leap of faith. Imagine that sum-sent already knows how to calculate the sum of the sentence containing all but the first number, e.g., '(2 3 4 5). To find this, we would simply call (sum-sent (bf sent)), and we should have faith that it will give us the correct sum. Given that, we know that:

    (sum-sent '(1 2 3 4 5)) ==> (+ 1 (sum-sent '(2 3 4 5)))

If we generalize this for any sentence of numbers, this gives us our recursive case:

    (+ (first sent) (sum-sent (bf sent)))

What happens when we stop here and define sum-sent as follows?

    (define (sum-sent sent)
      (+ (first sent) (sum-sent (bf sent))))

We're missing the base case! To solve this problem, we must add a case that will handle the empty sentence. The predicate empty? can be used to check for the empty sentence. Here is the completed version of sum-sent:

    (define (sum-sent sent)
      (if (empty? sent)
          0
          (+ (first sent) (sum-sent (bf sent)))))

Suppose we have a sentence of negative numbers, '(-1 -3 -4 -6). What will Racket output? Run through this example using the code for sum-sent above without typing it into the interpreter. Then, use the interpreter to check your work.

Feel free to try out more examples with sum-sent in the Racket interpreter. If the recursion is confusing, try looking at what trace outputs.

## Exercises

Test Your Understanding: count-ums

When you teach a class, people will get distracted if you say "um" too many times. Write a procedure called count-ums that takes in a sentence of words as its argument and counts the number of times "um" appears in that sentence:

    -> (count-ums '(today um we are going to um talk about the um combining method))
    3

Write count-ums recursively.

Hint #1: What should happen when the sentence is empty?
Hint #2: What should happen when the first word of the sentence is "um"?
Hint #3: What should happen when the first word of the sentence is NOT "um"?

Test Your Understanding: countdown

Write a procedure called countdown that takes in a number and works as follows:

    -> (countdown 10)
    '(10 9 8 7 6 5 4 3 2 1 blastoff!)
    -> (countdown 3)
    '(3 2 1 blastoff!)
    -> (countdown 1)
    '(1 blastoff!)
    -> (countdown 0)
    'blastoff!

One possible solution for each exercise is sketched below.
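The hints essentially dictate the shape of the solutions. Here is one possible sketch for each (ours, not the course's official answers; se is the sentence constructor from the course's word-and-sentence library):

    ;; count-ums: count occurrences of 'um in a sentence
    (define (count-ums sent)
      (cond ((empty? sent) 0)                  ; Hint #1: empty sentence has no ums
            ((equal? (first sent) 'um)         ; Hint #2: count this um, recurse on the rest
             (+ 1 (count-ums (bf sent))))
            (else (count-ums (bf sent)))))     ; Hint #3: skip the first word

    ;; countdown: build the sentence (n n-1 ... 1 blastoff!)
    (define (countdown num)
      (if (= num 0)
          'blastoff!
          (se num (countdown (- num 1)))))

Note how each recursive call shrinks the problem: count-ums drops one word from the sentence, and countdown decrements the number until it reaches the base case.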
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7368831634521484, "perplexity": 1468.030790901076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948531226.26/warc/CC-MAIN-20171213221219-20171214001219-00271.warc.gz"}
http://stackoverflow.com/questions/14218882/a-number-as-its-prime-number-parts?answertab=active
# A number as its prime number parts

I have to print the number of ways you can represent a given number as its prime number parts.

Let me clarify: Let's say I have been given this number 7. Now, first of all, I have to find all the prime numbers that are less than 7, which are 2, 3 and 5. Now, in how many ways can I summarize those numbers (I can use one number as many times I want) so that the result equals 7? For example, number 7 has five ways:

    2 + 2 + 3
    2 + 3 + 2
    2 + 5
    3 + 2 + 2
    5 + 2

I'm totally lost with this task. First I figured I'd make an array of usable elements like so: { 2, 2, 2, 3, 3, 5 } (7/2 = 3, so 2 must appear three times. Same goes with 3, which gets two occurrences). After that, loop through the array and choose a 'leader' that determines how far in the array we are. I know the explanation is horrible, so here's the code:

    #include <iostream>
    #include <vector>

    int primes_all[25] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97};

    int main()
    {
        int number;
        std::cin >> number;

        std::vector<int> primes_used;
        for(int i = 0; i < 25; i++)
        {
            if(primes_all[i] < number && number-primes_all[i] > 1)
            {
                for(int k = 0; k < number/primes_all[i]; k++)
                    primes_used.push_back(primes_all[i]);
            }
            else break;
        }

        int result = 0;
        for(size_t i = 0; i < primes_used.size(); i++)
        {
            int j = primes_used.size()-1;
            int new_num = number - primes_used[i];

            while(new_num > 1 && j > -1)
            {
                if(j > -1)
                    while(primes_used[j] > new_num && j > 0)
                        j--;

                if(j != i && j > -1)
                {
                    new_num -= primes_used[j];
                    std::cout << primes_used[i] << " " << primes_used[j] << " " << new_num << std::endl;
                }
                j--;
            }
            if(new_num == 0) result++;
        }

        std::cout << result << std::endl;
        system("pause");
        return 0;
    }

This simply doesn't work. Simply because the idea behind it is wrong. Here are a few details about the limits:

• Time limit: 1 second
• Memory limit: 128 MB

Also, the biggest number that can be given is 100. That's why I made the array of prime numbers below 100. The result grows very fast as the given number gets bigger, and will need a BigInteger class later on, but that's not an issue. A few results known:

    Input  Result
    7      5
    20     732
    80     10343662267187

SO... Any ideas? Is this a combinatory problem? I don't need code, just an idea. I'm still a newbie to C++ but I'll manage.

Keep in mind that 3 + 2 + 2 is different than 2 + 3 + 2. Also, were the given number to be a prime itself, it won't be counted. For example, if the given number is 7, only these sums are valid:

    2 + 2 + 3
    2 + 3 + 2
    2 + 5
    3 + 2 + 2
    5 + 2
    7 <= excluded

- is 3 + 2 + 2 considered different from 2 + 2 + 3? – corsiKa Jan 8 '13 at 16:14
yes it is. 3 + 2 + 2 != 2 + 3 + 2 != 2 + 2 + 3 – Olavi Mustanoja Jan 8 '13 at 16:17
This is related to Goldbach conjecture. – user1929959 Jan 8 '13 at 16:18

## 3 Answers

Dynamic programming is your friend here. Consider the number 27. If 7 has 5 results, and 20 has 732 results, then you know that 27 has at least (732 + 5) results. You can use a two variable system (1 + 26, 2 + 25 ... etc) using the precomputed values for those as you go. You don't have to recompute 25 or 26 because you already did them.

- And don't do this in recursion, "because you already did them", but rather in a loop starting with building the sum for 1, then for 2 and so on until you build the sum for your target number. Compute new values by reusing the old values + add the base case "single prime number" in each step.
You don't need your set of numbers which can be used (primes_used) – leemes Jan 8 '13 at 15:57
This went over my head. Could you clarify just a bit? – Olavi Mustanoja Jan 8 '13 at 16:19
This won't work in general as proposed here; you're disregarding the identity of the primes and these matter since distinct ordering matters. – Eamon Nerbonne Jan 19 '13 at 13:35

Here's an efficient implementation which uses dynamic programming like corsiKa suggests, but does not use the algorithm he describes. Simply: if n is reachable via k distinct paths (including the single-step one, if it exists), and p is prime, then we construct k paths to n+p by appending p to all paths to n. Considering all such n < N will produce an exhaustive list of valid paths to N. So we just sum the number of paths so discovered.

    #include <iostream>

    int primes_all[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97};
    const int N_max = 85;

    typedef long long ways;
    ways ways_to_reach_N[N_max + 1] = { 1 };

    int main() {
        // find all paths
        for( int i = 0; i <= N_max; ++i ) {
            ways ways_to_reach_i = ways_to_reach_N[i];
            if (ways_to_reach_i) {
                for( int* p = primes_all; *p <= N_max - i && p < (&primes_all)[1]; ++p ) {
                    ways_to_reach_N[i + *p] += ways_to_reach_i;
                }
            }
        }
        // eliminate single-step paths
        for( int* p = primes_all; *p <= N_max && p < (&primes_all)[1]; ++p ) {
            --ways_to_reach_N[*p];
        }
        // print results
        for( int i = 1; i <= N_max; ++i ) {
            ways ways_to_reach_i = ways_to_reach_N[i];
            if (ways_to_reach_i) {
                std::cout << i << " -- " << ways_to_reach_i << std::endl;
            }
        }
        return 0;
    }

Replacing the typedef ways with a big integer type is left as an exercise to the reader.

- The concept you are searching for is the "prime partitions" of a number. A partition of a number is a way of adding numbers to reach the target; for instance, 1+1+2+3 is a partition of 7. If all the addends are prime, then the partition is a prime partition. I think your example is wrong. The number 7 is usually considered to have 3 prime partitions: 2+2+3, 2+5, and 7. The order of the addends doesn't matter. In number theory the function that counts prime partitions is kappa, so we would say kappa(7) = 3.

The usual calculation of kappa is done in two parts. The first part is a function to compute the sum of the prime factors of a number; for instance, 42=2·3·7, so sopf(42)=12. Note that sopf(12)=5 because the sum is over only the distinct factors of a number, so even though 12=2·2·3, only one 2 is included in the calculation of the sum. Given sopf, there is a lengthy formula to calculate kappa; I'll give it in LaTeX form, since I don't know how to enter it here:

    \kappa(n) = \frac{1}{n}\left(\mathrm{sopf}(n) + \sum_{j=1}^{n-1} \mathrm{sopf}(j) \cdot \kappa(n-j)\right)

If you actually want a list of the partitions, instead of just the count, there is a dynamic programming solution that @corsiKa pointed out. I discuss prime partitions in more detail at my blog, including source code to produce both the count and the list.

- That's some cool info. However, in my case, 3 + 2 + 2 is different than 2 + 3 + 2 is different than 2 + 2 + 3 – Olavi Mustanoja Jan 8 '13 at 16:18
That's not the normal way. First, you should clarify with your instructor exactly what is being requested. Then, if you really want what you say you do, Google for "prime partitions with repetition" or "distinct prime partitions."
In any case, be aware that your example of the prime partitions of 7 omits the number 7 itself, which is prime, so by your method there are six ways, not five, to make the partitions of 7. – user448810 Jan 8 '13 at 16:30
Sorry, I thought I mentioned. I'll edit the description of the problem. Thanks for pointing the path, tho :D – Olavi Mustanoja Jan 8 '13 at 16:37
Is this an SPOJ problem? Or some other web coding site? If so, please give a link to the original problem. If it's homework, please post a link to the assignment. – user448810 Jan 8 '13 at 17:05
It's homework, the assignment came from my tutor's mouth and never existed on paper, and it was in Finnish. I described the problem as well as I could – Olavi Mustanoja Jan 8 '13 at 17:19
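For reference, the kappa recurrence from the last answer is easy to implement directly. A minimal sketch (ours, not part of the thread; note that kappa counts unordered prime partitions, whereas the question asks for ordered sums, which the dynamic-programming program above computes):

    #include <iostream>
    #include <vector>

    // sum of the distinct prime factors of n, e.g. sopf(42) = 2 + 3 + 7 = 12
    long long sopf(int n) {
        long long s = 0;
        for (int p = 2; p * p <= n; ++p) {
            if (n % p == 0) {
                s += p;
                while (n % p == 0) n /= p;  // strip repeated factors
            }
        }
        if (n > 1) s += n;  // leftover factor is prime
        return s;
    }

    int main() {
        const int N = 20;
        std::vector<double> kappa(N + 1, 0.0);  // kappa[n] = number of prime partitions of n
        for (int n = 2; n <= N; ++n) {
            double sum = sopf(n);
            for (int j = 1; j < n; ++j)
                sum += sopf(j) * kappa[n - j];
            kappa[n] = sum / n;
        }
        std::cout << kappa[7] << std::endl;  // 3, namely 2+2+3, 2+5 and 7
        return 0;
    }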
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6035142540931702, "perplexity": 756.4984218947635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462987.25/warc/CC-MAIN-20151124205422-00077-ip-10-71-132-137.ec2.internal.warc.gz"}
https://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Electric_Potential_Energy
# A-level Physics (Advancing Physics)/Electric Potential Energy

Just as an object at a distance r from a sphere has gravitational potential energy, a charge at a distance r from another charge has electrical potential energy εelec. This is given by the formula:

${\displaystyle \epsilon _{elec}=V_{elec}q}$,

where Velec is the electric potential at the position of the charge q due to the charge Q. In a uniform field, voltage is given by:

${\displaystyle V_{elec}=E_{elec}d}$,

where d is distance, and Eelec is electric field strength. Combining these two formulae, we get:

${\displaystyle \epsilon _{elec}=qE_{elec}d}$

For the field around a point charge, the situation is different. By the same method, we get:

${\displaystyle \epsilon _{elec}={\frac {-kQq}{r}}}$

If a charge loses electric potential energy, it must gain some other sort of energy. You should also note that force is the rate of change of energy with respect to distance, and that, therefore:

${\displaystyle \epsilon _{elec}=\int {F\;dr}}$

## The Electronvolt

The electronvolt (eV) is a unit of energy, defined as the kinetic energy gained by an electron which has been accelerated through a potential difference of 1 V:

1 eV = 1.6 × 10⁻¹⁹ J

Numerically, the value of 1 eV in joules equals the magnitude of the electron's charge in coulombs. For example: if a proton has an energy of 5 MeV, then in joules it will be 5 × 10⁶ × 1.6 × 10⁻¹⁹ = 8 × 10⁻¹³ J. Using the eV is an advantage when high-energy particles are involved, as in particle accelerators.

## Summary of Electric Fields

You should now know (if you did the electric fields section in the right order) about four attributes of electric fields: force, field strength, potential energy and potential. These can be summarised by the following table:

Force ${\displaystyle F_{elec}={\frac {-kQq}{r^{2}}}}$ → integrate with respect to r → Potential Energy ${\displaystyle \epsilon _{elec}={\frac {-kQq}{r}}}$

↓ per unit charge ↓

Field Strength ${\displaystyle E_{elec}={\frac {-kQ}{r^{2}}}}$ → integrate with respect to r → Potential ${\displaystyle V_{elec}={\frac {-kQ}{r}}}$

This table is very similar to that for gravitational fields. The only difference is that field strength and potential are per unit charge, instead of per unit mass. This means that field strength is not the same as acceleration. Remember that integrate means 'find the area under the graph' and differentiate (the reverse process) means 'find the gradient of the graph'.

## Questions

k = 8.99 × 10⁹ N m² C⁻²

1. Convert 5 × 10⁻¹³ J to MeV.
2. Convert 0.9 GeV to J.
3. What is the potential energy of an electron at the negatively charged plate of a uniform electric field when the potential difference between the two plates is 100 V?
4. What is the potential energy of a 2 C charge 2 cm from a 0.5 C charge?
5. What is represented by the gradient of a graph of electric potential energy against distance from some charge?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471825361251831, "perplexity": 560.7196062476639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385378.96/warc/CC-MAIN-20210308143535-20210308173535-00572.warc.gz"}
http://perimeterinstitute.ca/fr/video-library/collection/cosmology-gravitation?page=1
# Cosmology & Gravitation

This series consists of talks in the areas of Cosmology, Gravitation and Particle Physics.

## Seminar Series Events/Videos

Currently there are no upcoming talks in this series.

## Aspects of field theory with higher derivatives

Tuesday, Oct 24, 2017

I will discuss related aspects of field theories with higher-derivative Lagrangians but second-order equations of motion, with a focus on the Lovelock and Horndeski classes that have found use in modifications to general relativity. In the first half I will investigate when restricting to such terms is and is not well-justified from an effective field theory perspective. In the second half I will discuss how non-perturbative effects, like domain walls and quantum tunneling, are modified in the presence of these kinetic terms.

## Primordial gravity waves from tidal imprints in large-scale structure

Tuesday, Oct 17, 2017

I will describe a tidal effect whereby the decay of primordial gravity waves leaves a permanent shear in the large-scale structure of the Universe. Future large-scale structure surveys - especially radio surveys of high-redshift hydrogen gas - could measure this shear and its spatial dependence to form a map of the initial gravity-wave field. The three dimensional nature of this probe makes it sensitive to the helicity of the gravity waves, allowing for searches for early-Universe gravitational parity violation.

## Isotropising an anisotropic cyclic cosmology

Tuesday, Oct 10, 2017

Standard models of cosmology use inflation as a mechanism to resolve the isotropy and homogeneity problem of the universe as well as the flatness problem. However, due to various well known problems with the inflationary paradigm, there has been an ongoing search for alternatives. Perhaps the most famous among these is the cyclic universe scenario or scenarios which incorporate bounces. As these scenarios have a contracting phase in the evolution of the universe, it is reasonable to ask whether the problems of homogeneity and isotropy can still be resolved in these scenarios.

## How gravity modifies thermodynamics: Maximal temperature and Poincare recurrence theorem

Tuesday, Sep 26, 2017

Thermodynamics is a closed field of research. The laws of thermodynamics, established in the nineteenth century, are still standing unchallenged. However, they do not include gravity. Inclusion of gravity into the thermodynamical system can significantly modify the expected behavior of the system. We will demonstrate that gravity dynamically induces a maximal temperature that can be reached in a gas of particles. We will also show how gravity can significantly change the Poincare recurrence theorem, and sometimes even prevent the recurrence from happening.

## Dynamical chaos as a tool for characterizing multi-planet systems

Tuesday, Sep 19, 2017

Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems.
I will present a recent such application in the TRAPPIST-1 system, with 7 Earth-sized planets in the longest resonant chain discovered to date. In this case the complicated resonant phase space structure allows for strong constraints.

## How Black Holes Dine above the Eddington "Limit" without Overeating or Excessive Belching

Tuesday, Sep 12, 2017

The study of super-Eddington accretion is essential to our understanding of the growth of super-massive black holes in the early universe, the accretion of tidally disrupted stars, and the nature of ultraluminous X-ray sources. Unfortunately, this mode of accretion is particularly difficult to model because of the multidimensionality of the flow, the importance of magnetohydrodynamic turbulence, and the dominant dynamical role played by radiation forces. However, recent increases in computing power and advances in algorithms are facilitating major improvements in our ability to model radiat

## HIRAX: The Hydrogen Intensity and Real-time Analysis eXperiment

Monday, Sep 11, 2017

The 21cm transition of atomic hydrogen is rapidly becoming one of our most powerful tools for probing the evolution of the universe. The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) is a planned 1,024-element array to be built in South Africa that will study the (possible) evolution of dark energy from z=0.8 to 2.5.

## Uber-Gravity and H0 tension

Tuesday, Aug 29, 2017

Recently, the idea of taking an ensemble average over gravity models has been introduced. Based on this idea, we study the ensemble average over (effectively) all the gravity models, dubbing the name uber-gravity, which is a fixed point in the model space. The uber-gravity has interesting universal properties, independent from the choice of basis: i) it mimics Einstein-Hilbert gravity for the high-curvature regime, ii) it predicts stronger gravitational force for an intermediate-curvature regime, iii) surprisingly, for the low-curvature regime, i.e.

## Universality classes of inflation as phases of condensed matter: slow-roll, solids, gaugids etc.

Tuesday, Aug 22, 2017

## Baryon Asymmetry and Gravitational Waves from Pseudoscalar Inflation

Thursday, Aug 10, 2017

In models of inflation driven by an axion-like pseudoscalar field, the inflaton, a, may couple to the standard model hypercharge gauge field via a Chern-Simons-type interaction, L ⊃ a F F̃. This coupling results in the explosive production of hypermagnetic fields during inflation, which has two interesting consequences: (1) The primordial hypermagnetic field is maximally helical. It is therefore capable of sourcing the generation of nonzero baryon number around the electroweak phase transition (via the chiral anomaly in the standard model).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175169825553894, "perplexity": 2414.548353810369}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00155.warc.gz"}
http://albanyareamathcircle.blogspot.com/2009/07/leonard-mlodinow-on-probably-blindspots.html
## Wednesday, July 1, 2009 ### Leonard Mlodinow on probability blindspots In 2002, psychologist Daniel Kahneman won a Nobel Prize in Economics for work that pointed out that human beings often have problems reasoning through real world problems involving probabilities. Caltech physicist Leonard Mlodinow has written a fascinating and clearly written new book, The Drunkard's Walk: How Randomness Rules Our Lives, with many beautiful examples illustrating common logical fallacies. You can learn a good deal in a very enjoyable way by reading his book. You might want to start out by watching a talk he gave about his book to Google employees. (See the video above.) As Mlodinow points out, probability blind spots can cause seriously bad decisions in many domains. Many professionals, including physicians, judges, and investors, make errors in reasoning through situations involving probabilities. (Sadly, it appears that medical schools and law schools don't teach much about probability. Business schools DO teach about probabilities, but it's not clear how much actually sinks in.) I personally think the answer is that students need to grow up thinking hard and deeply about probability. The habits of thinking correctly about probabilistic calculations need to be ingrained deeply in all of us long before we become jurors or adult patients, let alone judges or physicians. There is a growing consensus about the need for a really sound mathematical education in probability and statistics. Harvey Mudd math professor Art Benjamin makes a good case for it in his TED Talk video below: Hat tip: Richard Rusczyk
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18040142953395844, "perplexity": 2424.6275598658003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00764.warc.gz"}
https://ask.openstack.org/en/answers/5127/revisions/
# Revision history [back]

Revision 1:

it should be "physical_interface_mappings = physnet2:eth1"

Revision 2 (smaffulli):

it should be physical_interface_mappings = physnet2:eth1, and multiple mappings is a comma separated list like: physical_interface_mappings = physnet2:eth1,physnet3:eth2

Revision 3 (smaffulli):

That must be a doc bug. It should be: physical_interface_mappings = physnet2:eth1. Or, for multiple mappings, it is a comma separated list like: physical_interface_mappings = physnet2:eth1,physnet3:eth2. The key-value separator in the main value is the :
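For context, here is how the corrected option would appear in the Linux bridge agent's configuration file. This is a sketch; the exact file path and section header are assumptions that vary between OpenStack releases (e.g. /etc/neutron/plugins/ml2/linuxbridge_agent.ini):

    [linux_bridge]
    # provider network label : physical interface, one pair per mapping
    physical_interface_mappings = physnet2:eth1,physnet3:eth2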
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41722363233566284, "perplexity": 21088.867142952888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00325.warc.gz"}
http://meetings.aps.org/Meeting/MAR09/Event/93433
Session A25: Focus Session: Graphene I: Electronic Properties

8:00 AM–11:00 AM, Monday, March 16, 2009

Room: 327

Chair: Jules Carbotte, McMaster University

Abstract ID: BAPS.2009.MAR.A25.11

Abstract: A25.00011 : Fermi surface of graphene on Ru(0001)

10:24 AM–10:36 AM

Authors: Thomas Brugger, Hugo Dil, Jürg Osterwalder, Thomas Greber (Physik-Institut, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland); Bin Wang, Marie-Laure Bocquet (Université de Lyon, Laboratoire de Chimie, École Normale Supérieure de Lyon, CNRS, France); Sebastian Günther, Joost Wintterlin (Department Chemie, Ludwig-Maximilians-Universität, Butenandtstrasse 5-13, D-81377 München, Germany)

The structure of a single layer graphene on Ru(0001) is compared with that of a single layer hexagonal boron nitride nanomesh on Ru(0001). Both are corrugated sp$^2$ hybridized networks and display a $\pi$-band gap at the $\overline{\rm{K}}$ point of their $1\times1$ Brillouin zone. In contrast to $h$-BN/Ru(0001), g/Ru(0001) has a distinct Fermi surface which indicates that 0.1 electrons per $1\times1$ unit cell are transferred from the Ru substrate to the graphene. Photoemission from adsorbed xenon on g/Ru(0001) identifies two distinct Xe 5p$_{1/2}$ lines, separated by 240 meV, which reveals a corrugated electrostatic potential energy surface like on $h$-BN/Rh(111) [1]. These two Xe species are related to the topography of the template and have different desorption energies.

[1] H. Dil, J. Lobo-Checa, R. Laskowski, P. Blaha, S. Berner, J. Osterwalder, and T. Greber, Science 319, 1824 (2008).

To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2009.MAR.A25.11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3747164309024811, "perplexity": 23589.15223981465}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770371.28/warc/CC-MAIN-20141217075250-00143-ip-10-231-17-201.ec2.internal.warc.gz"}
https://cstheory.stackexchange.com/questions/9840/what-are-some-problems-where-we-know-we-have-an-optimal-algorithm/9872
# What are some problems where we know we have an optimal algorithm?

What are some non-trivial problems where we know the current algorithm we have is the asymptotically optimal one? (For Turing machines) And how is this proved?

• A Turing machine is a tricky model for lower bounds. Changing the defn can change the polynomial in the running time, so you need to be a little more specific. – Suresh Venkat Jan 22 '12 at 2:13
• How do you define non-trivial? – funkstar Jan 24 '12 at 8:03
• As Suresh says, the kind of TM you use has an influence. I guess that for the language of palindromes (words you can read backwards), we have an optimal 1-tape TM which takes $\mathcal O(n^2)$ steps to decide the language. And for 2-tape TMs, it is decidable in linear time, thus pretty much optimal too. – Bruno Jun 27 '12 at 6:41

Any algorithm which takes linear time and has to read its whole input must be asymptotically optimal. Similarly, as Raphael comments, any algorithm whose runtime is of the same order as output size is optimal.

• Similarly, any algorithm whose runtime is of the same order as output size is optimal. – Raphael Jan 21 '12 at 16:57
• I believe this answer and the comment that follows it are the complete state of the art. – Jeffε Jan 22 '12 at 18:35
• Well this was disappointing – sture Jan 23 '12 at 5:56
• For the record, JɛffE's comment seems to refer to Shir's answer below. – András Salamon Jun 23 '13 at 14:17
• I was referring to Max's answer, not Shir's, and to Raphael's comment on Max's answer. – Jeffε Mar 15 '14 at 22:32

If the complexity measure you are considering is query complexity, i.e., the number of times the machine has to look at the input to solve a particular problem, then there are many problems for which we have optimal algorithms. The reason for this is that lower bounds for query complexity are easier to achieve than lower bounds for time or space complexity, thanks to some popular techniques including the adversary method. The downside, however, is that this complexity measure is almost exclusively used in quantum information processing as it provides an easy way of proving a gap between quantum and classical computational power.

The most notorious quantum algorithm in this framework is Grover's algorithm. Given a binary string $x_1,\dots ,x_n$ for which there exists a single $i$ such that $x_i=1$, you are required to find $i$. Classically (without a quantum computer), the most trivial algorithm is optimal: you need to query this string $n/2$ times on average in order to find $i$. Grover provided a quantum algorithm that does so in $O(\sqrt n)$ queries to the string. This has also been proven optimal.

• Indeed, query complexity is the underlying basis for Max's answer. For most problems, any algorithm provably "has to read the entire input" or at least a constant fraction of the input. – Jeffε Jan 24 '12 at 10:14

• If you are willing to change your model, quite a few lower bounds in data structures are tight. See Lower Bounds for Data Structures for pointers to good references for lower bounds in data structures.
• From the $\Omega(n \log n)$ bound for sorting in the comparison model that some people have mentioned here, you can obtain a similar bound for the convex hull problem by considering the case where the input is composed of points along the graph of an increasing function in the first quadrant of the plane.
• +1 for mentioning data structures.
But I don't think it's possible to obtain a useful lower bound for convex hulls via the comparison lower bounds for sorting. The reason is that the comparison model isn't powerful enough to compute convex hulls at all. What works instead is to use a more powerful model such as algebraic decision trees in which hulls can be computed, and then to adapt the lower bound for sorting to this more powerful model. – David Eppstein Jan 24 '12 at 21:20
• Makes sense, thanks for the clarification! – Abel Molina Jan 27 '12 at 9:20

1. Comparison sorting using $O(n \log n)$ comparisons (merge sort, to name one) is optimal; the proof involves simply calculating the height of a tree with $n!$ leaves.
2. Assuming the Unique Games Conjecture, Khot, Kindler, Mossel and O'Donnell showed that it is NP-complete to approximate Max-Cut better than Goemans and Williamson's algorithm. So in that sense G&W is optimal (assuming also that $P\neq NP$).
3. Some distributed algorithms can be shown to be optimal with respect to some conditions (e.g., the proportion of adversarial processors), but since you mentioned Turing machines, I guess that's not the type of examples you're looking for.

• Whether item 2 answers the question or not depends on what the asker means by "optimal," although I doubt that the asker is asking in that sense (otherwise there are many, many tight approximability results which do not even require UGC). Moreover, I do not think that either item 1 or 3 answers the question. – Tsuyoshi Ito Jan 21 '12 at 16:31
• @TsuyoshiIto, it's hard to guess what exactly the asker meant, which is what made me try answers in various directions in hope of hitting something useful for him/her. What makes you say that (1) is not a valid answer, by the way? – Shir Jan 21 '12 at 16:34
• The asker specifically asks for an algorithm optimal for Turing machines. – Tsuyoshi Ito Jan 21 '12 at 20:21
• Is "comparison sorting" actually a "problem"? Or is it a problem and a restriction on the model of computation? – Jeffε Jan 22 '12 at 18:36

Suppose you are given input $w = \langle M, x, t \rangle$ and are asked to decide if RAM machine $M$ terminates on input $x$ after $t$ steps. By the time hierarchy theorem, the optimal algorithm to decide this is to simulate the execution of $M(x)$ for $t$ steps, which can be done in time $O(t)$. (Note: for Turing machines, simulating the execution of $M$ takes $O(t \log t)$ steps; we only know a lower bound of $\Omega(t)$. So, this is not quite optimal for Turing machines specifically).

There are some other problems which contain the version of the halting problem as a sub-case. For example, deciding whether a sentence $\theta$ is a consequence of WS1S takes time $2 \uparrow \uparrow O(|\theta|)$ and this is optimal.

I am unsure what you mean by "non-trivial", but how about this. $L = \{0^{2^k} \mid k \geq 0\}$. This language is not regular; therefore, any single-tape TM deciding it must run in $\Omega(n \log n)$ time. The simple algorithm (crossing every other 0) is optimal.

If you allow dynamic data structure problems, we know some super-linear time optimal algorithms. This is in the cell probe model, which is as strong as the word RAM, i.e. this is not a restricted model such as algebraic decision trees. One example is keeping prefix sums under dynamic updates.
We start with an array of numbers $A[1], \ldots, A[n]$, and the goal is to keep a data structure that allows the following operations:

• Add $\Delta$ to $A[i]$, given $i$ and $\Delta$
• Compute the prefix sum $\sum_{j = 1}^{i} A[j]$, given $i$

You can easily support both operations in $O(\log n)$ time with a data structure based on an augmented binary tree with $A[i]$ at the leaves. Patrascu and Demaine showed this is optimal: for any data structure there is a sequence of $n$ additions and prefix sum queries that must take $\Omega(n\log n)$ time total.

Another example is union find: start with a partition of $\{1, \ldots, n\}$ into singletons, and keep a data structure that allows the two operations:

• Union: given $i$ and $j$, replace the part containing $i$ and the part containing $j$ with their union
• Find: given $i$, output a canonical element from the part containing $i$

Tarjan showed that the classical disjoint set forest data structure with the union by rank and path compression heuristics takes $O(\alpha(n))$ amortized time per operation, where $\alpha$ is the inverse Ackermann function. Fredman and Saks showed this is optimal: for any data structure there exists a sequence of $n$ union and find operations which must take $\Omega(n\alpha(n))$ time.

Many streaming algorithms have upper bounds matching their lower bounds.

There are two somewhat similar search algorithms that [my understanding is] are optimal based on particular constraints on the input ordering/distribution. However, presentations of the algorithms do not typically emphasize this optimality.

• golden section search for finding the maximum or minimum (extremum) of a unimodal function. Assumes input is a unimodal function. Finds it in logarithmic time on average. As I recall there may have been a proof of optimality in the book Structure and Interpretation of Computer Programs by Abelson & Sussman.
• binary search finds a point in logarithmic time on average in a sorted list, but requires input to be sorted. Am citing Wikipedia above but it does not have the proofs that they are optimal; maybe some other references that prove optimality can be found by the audience.

Many sublinear time algorithms have upper bounds matching their lower bounds.

• Flagged as a duplicate. – Jeffε Jan 24 '12 at 10:09
• Sublinear time algorithm and streaming algorithm are different areas. – Bin Fu Jan 24 '12 at 15:09
• That's true, but you should combine the answers into one. – Suresh Venkat Jan 24 '12 at 22:37
• Some examples of optimal sublinear time algorithms can be – Bin Fu Jan 25 '12 at 16:05
• it is also not clear why this is not a duplicate of the query complexity answer. – Artem Kaznatcheev Jan 26 '12 at 16:39
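As a concrete companion to the prefix-sum answer above: the $O(\log n)$ upper bound is commonly met by a Fenwick (binary indexed) tree, a compact alternative to the augmented binary tree described there. A sketch (ours, not from the thread):

    #include <cstdio>
    #include <vector>

    // Fenwick (binary indexed) tree: O(log n) point update and prefix-sum query.
    struct Fenwick {
        std::vector<long long> t;  // 1-indexed internal array
        explicit Fenwick(int n) : t(n + 1, 0) {}
        void add(int i, long long delta) {   // A[i] += delta, 1 <= i <= n
            for (; i < (int)t.size(); i += i & -i) t[i] += delta;
        }
        long long prefix(int i) const {      // returns A[1] + ... + A[i]
            long long s = 0;
            for (; i > 0; i -= i & -i) s += t[i];
            return s;
        }
    };

    int main() {
        Fenwick f(10);
        f.add(3, 5);
        f.add(7, 2);
        std::printf("%lld\n", f.prefix(7));  // prints 7
        f.add(3, -1);
        std::printf("%lld\n", f.prefix(5));  // prints 4
        return 0;
    }

By the Patrascu-Demaine lower bound quoted above, the $O(\log n)$ cost per operation of this structure is asymptotically optimal.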
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7893409132957458, "perplexity": 453.0885405495899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370529375.49/warc/CC-MAIN-20200405053120-20200405083120-00044.warc.gz"}
https://www.ias.ac.in/listing/bibliography/boms/TAPAS_KUILA
• TAPAS KUILA

Articles written in Bulletin of Materials Science

• Synthesis of iron pyrite with efficient bifunctional electrocatalytic activity towards overall water splitting in alkaline medium

Recently, a few investigations were conducted to understand the electrocatalytic activity of the pyrite FeS$_2$ towards hydrogen evolution reaction (HER) in acidic medium. A systematic investigation to understand its catalytic activity towards both HER and oxygen evolution reaction (OER) in alkaline medium is important, but rare. It was found that iron sulphides have inferior H-adsorption efficiency, which hindered their HER catalytic activity. Herein, a novel strategy was undertaken to obtain pyrites having different crystallite sizes and lattice strains. Changes in these crystal parameters affected the physicochemical phenomena occurring at the electrode–electrolyte interface, which in turn influenced the electrocatalytic activity of the FeS$_2$ and also altered the pathways of the reactions. The pyrite with the lowest lattice strain and crystallite size (FS3) showed superior catalytic activity towards both HER and OER. The overall water-splitting activity of FS3 was comparable with the state-of-the-art RuO$_2$–Pt/C couple. Moreover, the synthesized pyrites showed the capability to overcome the previously mentioned drawback, i.e., inferior H-adsorption efficiency. The pyrite FeS$_2$ could be a potential candidate as a cheap and efficient bifunctional electrocatalyst for overall water splitting. This investigation demonstrates that modulation of crystal parameters can be an efficient technique to tune the activity of an electrocatalyst.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5134560465812683, "perplexity": 16196.721500056146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00200.warc.gz"}
https://admin.clutchprep.com/chemistry/practice-problems/98746/write-the-chemical-formula-for-the-anion-present-in-the-aqueous-solution-of-agno
# Problem: Write the chemical formula for the anion present in the aqueous solution of AgNO3.

###### Expert Solution

AgNO3: silver's most common charge is +1, so the cation is Ag+. Since the compound is neutral overall, the remaining NO3 group must carry a −1 charge; the anion is therefore the nitrate ion, NO3−.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430801868438721, "perplexity": 4983.116565183915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655908294.32/warc/CC-MAIN-20200710113143-20200710143143-00509.warc.gz"}
http://mathhelpforum.com/differential-equations/140105-very-stuck.html
# Thread: very stuck

1. ## very stuck

hi all, first post. I'm very stuck on this question; it's from a practice paper for my degree exam in May. Can you guys have a look and point me in the right direction? I'm completely lost.

In the equation e = √(a/h), i.e. e = sqrt(a/h), the SI units of the quantity a are kg m^-1 s^-2 and the SI units of the quantity h are kg m^-3. What are the correct SI units for the quantity e? You should express your answer in terms of the simplest possible arrangement of base units and use the conventional symbols for those units.

Answer: SI units of e =

thanks in advance

2. Originally Posted by leoleo
In the equation e = √(a/h), the SI units of the quantity a are kg m^-1 s^-2 and the SI units of the quantity h are kg m^-3. What are the correct SI units for the quantity e?

First of all, simplify the expression inside the square root:

$\frac{a}{h} = \frac{\text{kg m}^{-1} \text{ s}^{-2}}{\text{kg m}^{-3}}$

kg will cancel, and you can simplify m remembering the laws of indices. From there, take the square root by halving each power. You should get an answer of $\text{m s}^{-1}$, which is speed/velocity.
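Spelling out the cancellation the reply describes (this is just the unit arithmetic, no new assumptions):

$$\frac{a}{h} = \frac{\text{kg m}^{-1}\,\text{s}^{-2}}{\text{kg m}^{-3}} = \text{m}^{-1-(-3)}\,\text{s}^{-2} = \text{m}^{2}\,\text{s}^{-2}, \qquad e = \sqrt{\text{m}^{2}\,\text{s}^{-2}} = \text{m s}^{-1}.$$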
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7151772379875183, "perplexity": 1500.012172331345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607848.9/warc/CC-MAIN-20170524152539-20170524172539-00043.warc.gz"}
https://www.physicsforums.com/threads/density-of-states-confusion.271317/
# Density of States Confusion

1. Nov 12, 2008

### Vanush

"The density of states (DOS) of a system describes the number of states at each energy level that are available to be occupied."

But I thought there can't be more than 1 electron in a state? How does the DOS have any meaning when dealing with electrons?

2. Nov 12, 2008

### nicksauce

My understanding is as follows: The density of states, g(E), tells you the number of possible states at each energy. Since these states are degenerate, you can have several electrons at the same energy, each in a different state. The expected number of electrons in a given energy state, f(E), is calculated using Fermi-Dirac statistics. http://en.wikipedia.org/wiki/Fermi-Dirac_statistics This can be no more than 1 because of the Pauli exclusion principle. So then the total number of electrons at a given energy would be f(E)g(E).

3. Nov 12, 2008

### weejee

More precisely, g(E) = (# of states between E and E+dE) / (dE). In a finite system, it is always a series of delta functions. As the system size gets bigger, so that we can assume it is in the thermodynamic limit, we smooth out the delta functions to get a continuous version of g(E).

4. Nov 13, 2008

### Vanush

Why do electron states split into bands in solids if states exist for electrons that have the same energy level?
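To make the f(E) factor from nicksauce's answer concrete, here is a small Python sketch of the Fermi–Dirac occupancy (the energy, chemical potential, and temperature values are illustrative, not from the thread):

```python
import numpy as np

def fermi_dirac(E, mu, kT):
    """Expected occupancy f(E) of a single-particle state at energy E."""
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

# At E = mu the occupancy is exactly 1/2, regardless of temperature.
print(fermi_dirac(E=1.0, mu=1.0, kT=0.025))   # 0.5
# Far below mu the state is essentially filled; far above, essentially empty.
print(fermi_dirac(E=0.5, mu=1.0, kT=0.025))   # ~1.0
```

The expected number of electrons at energy E is then f(E) g(E), as the answer states.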
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9468716382980347, "perplexity": 535.7530460163164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00060-ip-10-171-10-108.ec2.internal.warc.gz"}
https://terrytao.wordpress.com/2016/02/27/finite-time-blowup-for-a-supercritical-defocusing-nonlinear-wave-system/
I’ve just uploaded to the arXiv my paper Finite time blowup for a supercritical defocusing nonlinear wave system, submitted to Analysis and PDE. This paper was inspired by a question asked of me by Sergiu Klainerman recently, regarding whether there were any analogues of my blowup example for Navier-Stokes type equations in the setting of nonlinear wave equations. Recall that the defocusing nonlinear wave (NLW) equation reads

$\displaystyle \Box u = |u|^{p-1} u \ \ \ \ \ (1)$

where ${u: {\bf R}^{1+d} \rightarrow {\bf R}}$ is the unknown scalar field, ${\Box = -\partial_t^2 + \Delta}$ is the d’Alembertian operator, and ${p>1}$ is an exponent. We can generalise this equation to the defocusing nonlinear wave system

$\displaystyle \Box u = (\nabla F)(u) \ \ \ \ \ (2)$

where ${u: {\bf R}^{1+d} \rightarrow {\bf R}^m}$ is now a system of scalar fields, and ${F: {\bf R}^m \rightarrow {\bf R}}$ is a potential which is homogeneous of degree ${p+1}$ and strictly positive away from the origin; the scalar equation corresponds to the case where ${m=1}$ and ${F(u) = \frac{1}{p+1} |u|^{p+1}}$. We will be interested in smooth solutions ${u}$ to (2). It is only natural to restrict to the smooth category when the potential ${F}$ is also smooth; unfortunately, if one requires ${F}$ to be homogeneous of order ${p+1}$ all the way down to the origin, then ${F}$ cannot be smooth unless it is identically zero or ${p+1}$ is an odd integer. This is too restrictive for us, so we will only require that ${F}$ be homogeneous away from the origin (e.g. outside the unit ball). In any event it is the behaviour of ${F(u)}$ for large ${u}$ which will be decisive in understanding regularity or blowup for the equation (2). Formally, solutions to the equation (2) enjoy a conserved energy

$\displaystyle E[u] = \int_{{\bf R}^d} \frac{1}{2} \|\partial_t u \|^2 + \frac{1}{2} \| \nabla_x u \|^2 + F(u)\ dx.$

Using this conserved energy, it is possible to establish global regularity for the Cauchy problem (2) in the energy-subcritical case when ${d \leq 2}$, or when ${d \geq 3}$ and ${p < 1+\frac{4}{d-2}}$. This means that for any smooth initial position ${u_0: {\bf R}^d \rightarrow {\bf R}^m}$ and initial velocity ${u_1: {\bf R}^d \rightarrow {\bf R}^m}$, there exists a (unique) smooth global solution ${u: {\bf R}^{1+d} \rightarrow {\bf R}^m}$ to the equation (2) with ${u(0,x) = u_0(x)}$ and ${\partial_t u(0,x) = u_1(x)}$. These classical global regularity results (essentially due to Jörgens) were famously extended to the energy-critical case when ${d \geq 3}$ and ${p = 1 + \frac{4}{d-2}}$ by Grillakis, Struwe, and Shatah-Struwe (though for various technical reasons, the global regularity component of these results was limited to the range ${3 \leq d \leq 7}$). A key tool used in the energy-critical theory is the Morawetz estimate

$\displaystyle \int_0^T \int_{{\bf R}^d} \frac{|u(t,x)|^{p+1}}{|x|}\ dx dt \lesssim E[u]$

which can be proven by manipulating the properties of the stress-energy tensor

$\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + F(u))$

(with the usual summation conventions involving the Minkowski metric ${\eta_{\alpha \beta} dx^\alpha dx^\beta = -dt^2 + |dx|^2}$) and in particular exploiting the divergence-free nature of this tensor: $\partial^\beta T_{\alpha \beta} = 0$. See for instance the text of Shatah-Struwe, or my own PDE book, for more details.
The energy-critical regularity results have also been extended to slightly supercritical settings in which the potential grows by a logarithmic factor or so faster than the critical rate; see the results of myself and of Roy. This leaves the question of global regularity for the energy supercritical case when ${d \geq 3}$ and ${p > 1+\frac{4}{d-2}}$. On the one hand, global smooth solutions are known for small data (if ${F}$ vanishes to sufficiently high order at the origin, see e.g. the work of Lindblad and Sogge), and global weak solutions for large data were constructed long ago by Segal. On the other hand, the solution map, if it exists, is known to be extremely unstable, particularly at high frequencies; see for instance this paper of Lebeau, this paper of Christ, Colliander, and myself, this paper of Brenner and Kumlin, or this paper of Ibrahim, Majdoub, and Masmoudi for various formulations of this instability. In the case of the focusing NLW ${-\partial_{tt} u + \Delta u = - |u|^{p-1} u}$, one can easily create solutions that blow up in finite time by ODE constructions, for instance one can take ${u(t,x) = c (1-t)^{-\frac{2}{p-1}}}$ with ${c = (\frac{2(p+1)}{(p-1)^2})^{\frac{1}{p-1}}}$, which blows up as ${t}$ approaches ${1}$. However the situation in the defocusing supercritical case is less clear. The strongest positive results are of Kenig-Merle and Killip-Visan, which show (under some additional technical hypotheses) that global regularity for such equations holds under the additional assumption that the critical Sobolev norm of the solution stays bounded. Roughly speaking, this shows that “Type II blowup” cannot occur for (2). Our main result is that finite time blowup can in fact occur, at least for three-dimensional systems where the number ${m}$ of degrees of freedom is sufficiently large: Theorem 1 Let ${d=3}$, ${p > 5}$, and ${m \geq 76}$. Then there exists a smooth potential ${F: {\bf R}^m \rightarrow {\bf R}}$, positive and homogeneous of degree ${p+1}$ away from the origin, and a solution to (2) with smooth initial data that develops a singularity in finite time. The rather large lower bound of ${76}$ on ${m}$ here is primarily due to our use of the Nash embedding theorem (which is the first time I have actually had to use this theorem in an application!). It can certainly be lowered, but unfortunately our methods do not seem to be able to bring ${m}$ all the way down to ${1}$, so we do not directly exhibit finite time blowup for the scalar supercritical defocusing NLW. Nevertheless, this result presents a barrier to any attempt to prove global regularity for that equation, in that it must somehow use a property of the scalar equation which is not available for systems. It is likely that the methods can be adapted to higher dimensions than three, but we take advantage of some special structure to the equations in three dimensions (related to the strong Huygens principle) which does not seem to be available in higher dimensions. The blowup will in fact be of discrete self-similar type in a backwards light cone, thus ${u}$ will obey a relation of the form $\displaystyle u(e^S t, e^S x) = e^{-\frac{2}{p-1} S} u(t,x)$ for some fixed ${S>0}$ (the exponent ${-\frac{2}{p-1}}$ is mandated by dimensional analysis considerations). It would be natural to consider continuously self-similar solutions (in which the above relation holds for all ${S}$, not just one ${S}$). 
And rough self-similar solutions have been constructed in the literature by perturbative methods (see this paper of Planchon, or this paper of Ribaud and Youssfi). However, it turns out that continuously self-similar solutions to a defocusing equation have to obey an additional monotonicity formula which causes them to not exist in three spatial dimensions; this argument is given in my paper. So we have to work just with discretely self-similar solutions. Because of the discrete self-similarity, the finite time blowup solution will be “locally Type II” in the sense that scale-invariant norms inside the backwards light cone stay bounded as one approaches the singularity. But it will not be “globally Type II” in that scale-invariant norms stay bounded outside the light cone as well; indeed energy will leak from the light cone at every scale. This is consistent with the results of Kenig-Merle and Killip-Visan which preclude “globally Type II” blowup solutions to these equations in many cases. We now sketch the arguments used to prove this theorem. Usually when studying the NLW, we think of the potential ${F}$ (and the initial data ${u_0,u_1}$) as being given in advance, and then try to solve for ${u}$ as an unknown field. However, in this problem we have the freedom to select ${F}$. So we can look at this problem from a “backwards” direction: we first choose the field ${u}$, and then fit the potential ${F}$ (and the initial data) to match that field. Now, one cannot write down a completely arbitrary field ${u}$ and hope to find a potential ${F}$ obeying (2), as there are some constraints coming from the homogeneity of ${F}$. Namely, from the Euler identity $\displaystyle \langle u, (\nabla F)(u) \rangle = (p+1) F(u)$ we see that ${F(u)}$ can be recovered from (2) by the formula $\displaystyle F(u) = \frac{1}{p+1} \langle u, \Box u \rangle \ \ \ \ \ (3)$ so the defocusing nature of ${F}$ imposes a constraint $\displaystyle \langle u, \Box u \rangle > 0.$ Furthermore, taking a derivative of (3) we obtain another constraining equation $\displaystyle \langle \partial_\alpha u, \Box u \rangle = \frac{1}{p+1} \partial_\alpha \langle u, \Box u \rangle$ that does not explicitly involve the potential ${F}$. Actually, one can write this equation in the more familiar form $\displaystyle \partial^\beta T_{\alpha \beta} = 0$ where ${T_{\alpha \beta}}$ is the stress-energy tensor $\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + \frac{1}{p+1} \langle u, \Box u \rangle),$ now written in a manner that does not explicitly involve ${F}$. With this reformulation, this suggests a strategy for locating ${u}$: first one selects a stress-energy tensor ${T_{\alpha \beta}}$ that is divergence-free and obeys suitable positive definiteness and self-similarity properties, and then locates a self-similar map ${u}$ from the backwards light cone to ${{\bf R}^m}$ that has that stress-energy tensor (one also needs the map ${u}$ (or more precisely the direction component ${u/\|u\|}$ of that map) injective up to the discrete self-similarity, in order to define ${F(u)}$ consistently). 
If the stress-energy tensor was replaced by the simpler “energy tensor” $\displaystyle E_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle$ then the question of constructing an (injective) map ${u}$ with the specified energy tensor is precisely the embedding problem that was famously solved by Nash (viewing ${E_{\alpha \beta}}$ as a Riemannian metric on the domain of ${u}$, which in this case is a backwards light cone quotiented by a discrete self-similarity to make it compact). It turns out that one can adapt the Nash embedding theorem to also work with the stress-energy tensor as well (as long as one also specifies the mass density ${M = \|u\|^2}$, and as long as a certain positive definiteness property, related to the positive semi-definiteness of Gram matrices, is obeyed). Here is where the dimension ${76}$ shows up: Proposition 2 Let ${M}$ be a smooth compact Riemannian ${4}$-manifold, and let ${m \geq 76}$. Then ${M}$ smoothly isometrically embeds into the sphere ${S^{m-1}}$. Proof: The Nash embedding theorem (in the form given in this ICM lecture of Gunther) shows that ${M}$ can be smoothly isometrically embedded into ${{\bf R}^{19}}$, and thus in ${[-R,R]^{19}}$ for some large ${R}$. Using an irrational slope, the interval ${[-R,R]}$ can be smoothly isometrically embedded into the ${2}$-torus ${\frac{1}{\sqrt{38}} (S^1 \times S^1)}$, and so ${[-R,R]^{19}}$ and hence ${M}$ can be smoothly embedded in ${\frac{1}{\sqrt{38}} (S^1)^{38}}$. But from Pythagoras’ theorem, ${\frac{1}{\sqrt{38}} (S^1)^{38}}$ can be identified with a subset of ${S^{m-1}}$ for any ${m \geq 76}$, and the claim follows. $\Box$ One can presumably improve upon the bound ${76}$ by being more efficient with the embeddings (e.g. by modifying the proof of Nash embedding to embed directly into a round sphere), but I did not try to optimise the bound here. The remaining task is to construct the stress-energy tensor ${T_{\alpha \beta}}$. One can reduce to tensors that are invariant with respect to rotations around the spatial origin, but this still leaves a fair amount of degrees of freedom (it turns out that there are four fields that need to be specified, which are denoted ${M, E_{tt}, E_{tr}, E_{rr}}$ in my paper). However a small miracle occurs in three spatial dimensions, in that the divergence-free condition involves only two of the four degrees of freedom (or three out of four, depending on whether one considers a function that is even or odd in ${r}$ to only be half a degree of freedom). This is easiest to illustrate with the scalar NLW (1). Assuming spherical symmetry, this equation becomes $\displaystyle - \partial_{tt} u + \partial_{rr} u + \frac{2}{r} \partial_r u = |u|^{p-1} u.$ Making the substitution ${\phi := ru}$, we can eliminate the lower order term ${\frac{2}{r} \partial_r}$ completely to obtain $\displaystyle - \partial_{tt} \phi + \partial_{rr} \phi= \frac{1}{r^{p-1}} |\phi|^{p-1} \phi.$ (This can be compared with the situation in higher dimensions, in which an undesirable zeroth order term ${\frac{(d-1)(d-3)}{r^2} \phi}$ shows up.) 
In particular, if one introduces the null energy density $\displaystyle e_+ := \frac{1}{2} |\partial_t \phi + \partial_r \phi|^2$ and the potential energy density $\displaystyle V := \frac{|\phi|^{p+1}}{(p+1) r^{p-1}}$ then one can verify the equation $\displaystyle (\partial_t - \partial_r) e_+ + (\partial_t + \partial_r) V = - \frac{p-1}{r} V$ which can be viewed as a transport equation for ${e_+}$ with forcing term depending on ${V}$ (or vice versa), and is thus quite easy to solve explicitly by choosing one of these fields and then solving for the other. As it turns out, once one is in the supercritical regime ${p>5}$, one can solve this equation while giving ${e_+}$ and ${V}$ the right homogeneity (they have to be homogeneous of order ${-\frac{4}{p-1}}$, which is greater than ${-1}$ in the supercritical case) and positivity properties, and from this it is possible to prescribe all the other fields one needs to satisfy the conclusions of the main theorem. (It turns out that ${e_+}$ and ${V}$ will be concentrated near the boundary of the light cone, so this is how the solution ${u}$ will concentrate also.)
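As a quick symbolic sanity check on the ODE blowup example quoted earlier in the post ($u(t) = c(1-t)^{-\frac{2}{p-1}}$ for the focusing equation), here is a SymPy sketch; fixing $p = 7$ is my choice for concreteness:

```python
import sympy as sp

t = sp.symbols('t')
p = 7  # any fixed supercritical exponent p > 5 works the same way
c = sp.Rational(2 * (p + 1), (p - 1) ** 2) ** sp.Rational(1, p - 1)
u = c * (1 - t) ** sp.Rational(-2, p - 1)

# For an x-independent ansatz, the focusing equation
# -u_tt + Delta u = -|u|^{p-1} u reduces to u'' = u^p (for positive u).
print(sp.simplify(sp.diff(u, t, 2) - u ** p))  # 0
```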
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 133, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9566774368286133, "perplexity": 182.58614102848696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00202-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-161-problem-12e-calculus-early-transcendentals-8th-edition/9781285741550/match-the-vector-fields-f-with-the-plots-labeled-i-iv-give-reasons-for-your-choices-12-fx-y/2dc0439b-52f4-11e9-8385-02ee952b546e
Chapter 16.1, Problem 12E

### Calculus: Early Transcendentals, 8th Edition
James Stewart — ISBN: 9781285741550

Textbook Problem

# Match the vector fields F with the plots labeled I–IV. Give reasons for your choices. 12. F(x, y) = ⟨y, x − y⟩

To determine

To match: the vector field $F(x,y) = \langle y, x - y \rangle$ with the plots labeled I–IV.

Explanation

Given data: the vector field $F(x,y) = \langle y, x - y \rangle$.

Formula used: the length of a two-dimensional vector $F = \langle x, y \rangle$ is

$|F(x,y)| = \sqrt{x^2 + y^2}$ (1)

Find the length of $F(x,y)$ using equation (1):

$|F(x,y)| = \sqrt{y^2 + (x - y)^2}$

Consider the interval $(-2, 2)$ for $x$ and $(-2, 2)$ for $y$ to plot $F(x,y)$. The estimated values of $|F(x,y)|$ and $F(x,y)$ for different values of $x$ and $y$ are shown in Table 1.

Table 1 (Quadrant I samples):

(x, y) = (0,0): |F| = 0, F = ⟨0,0⟩
(x, y) = (1,0): |F| = 1, F = ⟨0,1⟩
(x, y) = (2,0): |F| = 2, F = ⟨0,2⟩
(x, y) = (0,1): |F| = √2, F = ⟨1,−1⟩
(x, y) = (1,1): |F| = 1, F = ⟨1,0⟩
(x, y) = (0,2): |F| = 2√2, F = ⟨2,−2⟩
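The textbook plots I–IV are not reproduced here, but a field like this is easy to visualize yourself; a small matplotlib sketch (my addition, not from the solution):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample F(x, y) = <y, x - y> on the window suggested in the solution.
x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
u, v = y, x - y

plt.quiver(x, y, u, v)
plt.gca().set_aspect('equal')
plt.title('F(x, y) = <y, x - y>')
plt.show()
```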
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318315148353577, "perplexity": 10977.564093345929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00410.warc.gz"}
https://www.physicsforums.com/threads/input-impedance-of-bjt-amplifier.569541/
# Input impedance of BJT amplifier

#### likephysics

1. The problem statement, all variables and given/known data
Find the input resistance of the circuit (see attached figure).

2. Relevant equations

3. The attempt at a solution
Q2 is diode-connected, so I replaced Q2 with its base-emitter resistance (rbe or rπ). So Rin = rbe1 + (β+1)·rbe2. But the answer is somewhat different: it's rbe1 + (β+1)·(rbe2 || 1/gm2). Where did the 1/gm2 come from?

#### vk6kro

Q2 is also a transistor, and it draws collector current as well as base current. So this affects the total resistance of Q2 in this circuit.

#### likephysics

I tried to draw an equivalent diagram and solve (see attached figure). Basically, rbe and the gm·vbe source are connected across the same terminals (because of the collector-base short of Q2). I attached a test source Vx to determine the input impedance of just Q2, so the impedance will be Vx/ix. After solving, I got 1/gm (assuming β >> 1).

From the equivalent circuit, Vx = Vbe:
ix = Vbe/rbe + gm·Vbe
ix = Vbe·(1/rbe + gm)
ix = Vx·(1/rbe + gm)
Rin = Vx/ix = 1/(1/rbe + gm)
rbe = β/gm
Rin = 1/(gm/β + gm)
Rin = 1/(gm·((1+β)/β))
If β >> 1, then (1+β)/β ≈ 1, so Rin ≈ 1/gm.
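A quick numeric comparison of the two formulas discussed above (the component values are illustrative, not given in the problem):

```python
beta = 100
gm = 0.04            # transconductance in siemens, so 1/gm = 25 ohms
rbe = beta / gm      # small-signal base-emitter resistance, 2500 ohms

def parallel(a, b):
    return a * b / (a + b)

r_q2 = parallel(rbe, 1 / gm)        # diode-connected Q2 looks like ~1/gm
rin_full = rbe + (beta + 1) * r_q2  # textbook answer
rin_naive = rbe + (beta + 1) * rbe  # first attempt (rbe2 only)
print(r_q2, rin_full, rin_naive)    # ~24.75, ~5000, 255000 ohms
```

The large gap between the two results is exactly the point of vk6kro's remark: the diode-connected transistor looks like roughly 1/gm, not rbe.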
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065357208251953, "perplexity": 25892.607905571214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00119.warc.gz"}
https://alldimensions.fandom.com/wiki/Iotaultraverse
The Iotaultraverse is the first Ultraverse. Every Iotaultraverse has 1 light beam, but there is a 1-in-$10^{10^{100}}$ chance that an Iotaultraverse has 2 light beams.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850624680519104, "perplexity": 6604.743764468609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00030.warc.gz"}
https://ai.stackexchange.com/tags/function-approximation/hot
# Tag Info

17 There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can approximate any continuous function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural ...

10 Here's an intuitive description answer: Function approximation can be done with any parameterizable function. Consider the problem of a $Q(s,a)$ space where $s$ is the positive reals, $a$ is $0$ or $1$, and the true Q-function is $Q(s, 0) = s^2$, and $Q(s, 1)= 2s^2$, for all states. If your function approximator is $Q(s, a) = m*s + n*a + b$, there exists no ...

9 Any supervised learning (SL) problem can be cast as an equivalent reinforcement learning (RL) one. Suppose you have the training dataset $\mathcal{D} = \{ (x_i, y_i) \}_{i=1}^N$, where $x_i$ is an observation and $y_i$ the corresponding label. Then let $x_i$ be a state and let $f(x_i) = \hat{y}_i$, where $f$ is your (current) model, be an action. So, the ...

6 Before anything, the function you have written for the network lacks the bias variables (I'm sure you used bias to get those beautiful images, otherwise your tanh network had to start from zero). Generally I would say it's impossible to have a good approximation of the sine with just 3 neurons, but if you want to consider one period of the sine, then you can do ...

5 The problem you discuss extends past the machine but to the man behind the machine (or woman). ML can be broken down into 3 components, the model, the data, and the learning procedure. This by the way extends to us as well. The model is our brain, the data is our experience and sensory input, and the learning procedure is there but unknown (for now ...

5 Let us suppose we have a network without any activation functions in between. Each layer consists of a linear function, i.e. layer_output = Weights * layer_input + bias. Consider a 2-layer neural network; the output from layer one will be x2 = W1*x1 + b1. Now we pass this to the second layer, which gives x3 = W2*x2 + b2. Substituting x2 gives x3 = W2*(W1*x1 + b1) + b2 = (W2*W1)*x1 + (W2*b1 + b2), which is again a single linear function of x1 (the sketch below checks this numerically) ...

5 As far as I'm aware, it is still somewhat of an open problem to get a really clear, formal understanding of exactly why / when we get a lack of convergence -- or, worse, sometimes a danger of divergence. It is typically attributed to the "deadly triad" (see 11.3 of the second edition of Sutton and Barto's book), the combination of: Function approximation, ...

5 Nonlinear relations between input and output can be achieved by using a nonlinear activation function on the value of each neuron, before it's passed on to the neurons in the next layer.

5 Inherently, no. The MLP is just a data structure. It represents a function, but a standard MLP is just representing an input-output mapping, and there's no recursive structure to it. On the other hand, possibly your source is referring to the common algorithms that operate over MLPs, specifically forward propagation for prediction and back propagation for ...

4 One of the important qualifications of the Universal approximation theorem is that the neural network approximation may be computationally infeasible. "A feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly." - Ian Goodfellow, DLB I can't ...
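The "stacked linear layers collapse to one linear layer" argument above is easy to confirm numerically; a small NumPy sketch (my illustration, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x1 = rng.normal(size=3)

# Two stacked linear (activation-free) layers...
x3 = W2 @ (W1 @ x1 + b1) + b2
# ...equal a single linear layer with W = W2 W1 and b = W2 b1 + b2.
x3_single = (W2 @ W1) @ x1 + (W2 @ b1 + b2)
print(np.allclose(x3, x3_single))  # True
```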
4 First, you need to consider what the "parameters" of this "optimization algorithm" that you want to "optimize" are. Let's take the most simple case, SGD without momentum. The update rule for this optimizer is: $$w_{t+1} \leftarrow w_{t} - a \cdot \nabla_{w_{t}} J(w_t) = w_{t} - a \cdot g_t$$ where $w_t$ are the weights at iteration $t$, $J$ is the cost ...

3 "Modern" Guarantees for Feed-Forward Neural Networks My answer will complement nbro's above, which gave a very nice overview of universal approximation theorems for different types of commonly used architectures, by focusing on recent developments specifically for feed-forward networks. I'll try to emphasize depth over breadth (sometimes called ...

3 Sure, you can define plenty of things we don't generally need to regard as recursive as such. An MLP is just a series of functions applied to its input. This can be loosely formulated as $$o_n = f(o_{n-1})$$ Where $o_n$ is the output of layer $n$. But this clearly doesn't reveal much, does it?

3 Of course, it's possible to define a problem where there is no relationship between input $x$ and output $y$. In general, if the mutual information between $x$ and $y$ is zero (i.e. $x$ and $y$ are statistically independent) then the best prediction you can do is independent of $x$. The task of machine learning is to learn a distribution $q(y|x)$ that is as ...

3 You can indeed fit a polynomial to your labelled data, which is known as polynomial regression (which can e.g. be done with the function numpy.polyfit). One apparent limitation of polynomial regression is that, in practice, you need to assume that your data follows some specific polynomial of some degree $n$, i.e. you assume that your data has the form of ...

3 You can choose those states, but is the agent aware of the state it is in? From the text, it seems that the agent cannot distinguish between the three states. Its observation function is completely uninformative. This is why a stochastic policy is what is needed. This is common for POMDPs, whereas for regular MDPs we can always find a deterministic policy ...

3 First I will address the issue of tabular methods. These do not use SGD at all. Although the updates are very similar to an SGD update, there is no gradient here and so we are not using SGD. Many tabular methods are proven to converge; for instance, the paper by Chris Watkins titled "Q-Learning" introduces and proves that Q-learning converges. Also ...

3 The notion of a state in reinforcement learning is (more or less) the same as the notion of a context in contextual bandits. The main difference is that, in reinforcement learning, an action $a_t$ in state $s_t$ not only affects the reward $r_t$ that the agent will get but it will also affect the next state $s_{t+1}$ the agent will end up in, while, in ...

3 Conceptually, in general, how is the context being handled in CB, compared to states in RL? In terms of its place in the description of Contextual Bandits and Reinforcement Learning, context in CB is an exact analog for state in RL. The framework for RL is a strict generalisation of CB, and can be made similar or the same in a few separate ways: If the ...
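The plain SGD update rule quoted in the first answer above, as a runnable sketch (the quadratic objective is my illustrative choice, not part of the answer):

```python
import numpy as np

def sgd(grad, w0, a=0.1, steps=100):
    """Iterate w_{t+1} = w_t - a * g_t."""
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        w = w - a * grad(w)
    return w

# Minimize J(w) = ||w||^2, whose gradient is g = 2w.
print(sgd(lambda w: 2.0 * w, w0=[3.0, -2.0]))  # close to [0, 0]
```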
2 There are a variety of possible things that could be wrong, but let me give you some potentially useful information. Neural networks with ReLU activation functions are Turing complete for a computation with on the order of as many steps as the network contains nodes - for a recurrent network (an RNN), that means the same level of Turing completeness as any finite ...

2 I have found some clues in Maei's thesis (2011): "Gradient Temporal-Difference Learning Algorithms." According to the thesis: GTD2 is a method that minimizes the projected Bellman error (MSPBE). GTD2 is convergent in the non-linear function approximation case (and off-policy). GTD2 converges to a TD fixed point (the same point as semi-gradient TD). GTD2 is slower ...

2 There are three problems: limited capacity of the neural network (explained by John), a non-stationary target, and a non-stationary distribution. Non-stationary target: In tabular Q-learning, when we update a Q-value, other Q-values in the table don't get affected by this. But in neural networks, one update to the weights aiming to alter one Q-value ends up affecting other Q-...

2 Andrej Karpathy's blog has a tutorial on getting a neural network to learn pong with reinforcement learning. His commentary on the current state of the field is interesting. He also provides a whole bunch of links (David Silver's course catches my eye). Here is a working link to the lecture videos. Here are demos of DeepMind's game playing. Get links to the ...

2 To check if a function is linear is easy: if you can train one fully connected layer, without activations, of the right dimensions (for a function $\mathbb{R}^n \rightarrow \mathbb{R}^m$ you need $nm$ weights, i.e. the matrix corresponding to the linear map), with enough data, to 100% accuracy... then it is linear. The estimated function is explicit: ...

2 In my humble opinion, it seems important to keep them separated if having a certain card can influence the result in some way beyond its numeric value, rather than only using the sum. But it depends on the game and its rules. For example: If having 5 cards of hearts in the set of 15 cards makes you win the game, then if you only represent ...

2 By itself, I'm not sure it's possible to know. It's possible the slides were old. Or, the intended purpose was to mention how sigmoid ranges from 0 to 1. Mostly, it looks like it was intended to bring up gradient descent. But it could also be an entry point to the discussion of other methods such as ReLU. Either that or perhaps some sort of norming ...

2 We usually optimize with respect to something. For example, you can train a neural network to locate cats in an image. This operation of locating cats in an image can be thought of as a function: given an image, a neural network can be trained to return the position of the cat in the image. In this sense, we can optimize a neural network with respect to this ...

2 It is not so much the problem of using Reinforcement Learning to train the neural networks, it is the assumptions made about the data given to standard Neural Networks. They are not capable of handling strongly correlated data, which is one of the motivations for introducing Recurrent Neural Networks, as they can handle this correlated data well.

2 First of all, neural networks are not (just) defined by the fact that they are typically trained with gradient descent and back-propagation. In fact, there are other ways of training neural networks, such as evolutionary algorithms and Hebb's rule (e.g.
Hopfield networks are typically associated with this Hebbian learning rule). The first difference ...
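The "one tabular update leaves every other Q-value untouched" point from the non-stationary-target answer above, as a minimal sketch (the state/action counts and constants are illustrative):

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    """Tabular Q-learning: only the (s, a) entry changes."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

q_update(0, 1, r=1.0, s_next=2)
print(Q[0, 1], Q.sum())  # 0.1 0.1 -- every other entry is still 0
```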
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8656224012374878, "perplexity": 441.68717800463014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00495.warc.gz"}
http://groupprops.subwiki.org/wiki/Corollary_of_Timmesfeld's_replacement_theorem_for_abelian_subgroups
# Corollary of Timmesfeld's replacement theorem for abelian subgroups

This article defines a subgroup property: a property that can be evaluated to true/false given a group and a subgroup thereof, invariant under subgroup equivalence. View a complete list of subgroup properties.

Suppose $G$ is a group of prime power order. Let $\mathcal{A}(G)$ denote the set of abelian subgroups of maximum order in $G$. If $A \in \mathcal{A}(G)$, and $B$ is an $A$-invariant abelian subgroup of $G$, then $B C_A(B)$ is an abelian subgroup of maximum order.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349406123161316, "perplexity": 773.618275796816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544679.86/warc/CC-MAIN-20161202170904-00194-ip-10-31-129-80.ec2.internal.warc.gz"}
https://runestone.academy/runestone/static/fopp/Functions/Exercises.html
12.16. Exercises

1. Write a function named num_test that takes a number as input. If the number is greater than 10, the function should return "Greater than 10." If the number is less than 10, the function should return "Less than 10." If the number is equal to 10, the function should return "Equal to 10."
2. Write a function that will return the number of digits in an integer.
3. Write a function that reverses its string argument.
4. Write a function that mirrors its string argument, generating a string containing the original string and the string backwards.
5. Write a function that removes all occurrences of a given letter from a string.
6. Although Python provides us with many list methods, it is good practice and very instructive to think about how they are implemented. Implement a Python function that works like the following: 1. count 2. in 3. reverse 4. index 5. insert
7. Write a function replace(s, old, new) that replaces all occurrences of old with new in a string s:

test(replace('Mississippi', 'i', 'I'), 'MIssIssIppI')
s = 'I love spom! Spom is my favorite food. Spom, spom, spom, yum!'
test(replace(s, 'om', 'am'), 'I love spam! Spam is my favorite food. Spam, spam, spam, yum!')
test(replace(s, 'o', 'a'), 'I lave spam! Spam is my favarite faad. Spam, spam, spam, yum!')

Hint: use the split and join methods.
8. Write a Python function that will take a list of 100 random integers between 0 and 1000 and return the maximum value. (Note: there is a builtin function named max but pretend you cannot use it.)
9. Write a function sum_of_squares(xs) that computes the sum of the squares of the numbers in the list xs. For example, sum_of_squares([2, 3, 4]) should return 4+9+16, which is 29.
10. Write a function to count how many odd numbers are in a list.
11. Sum up all the even numbers in a list.
12. Sum up all the negative numbers in a list.
13. Write a function findHypot. The function will be given the lengths of two sides of a right-angled triangle and it should return the length of the hypotenuse. (Hint: x ** 0.5 will return the square root, or use sqrt from the math module)
14. Write a function called is_even(n) that takes an integer as an argument and returns True if the argument is an even number and False if it is odd.
15. Now write the function is_odd(n) that returns True when n is odd and False otherwise.
16. Write a function is_rightangled which, given the lengths of three sides of a triangle, will determine whether the triangle is right-angled. Assume that the third argument to the function is always the longest side. It will return True if the triangle is right-angled, or False otherwise. Hint: floating point arithmetic is not always exactly accurate, so it is not safe to test floating point numbers for equality. If a good programmer wants to know whether x is equal or close enough to y, they would probably code it up as

if abs(x - y) < 0.001:   # if x is approximately equal to y
    ...

Sample solutions for a few of these exercises are sketched below.
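One possible set of solutions for a few of the exercises above (sketches of my own, not the book's reference solutions):

```python
import math

def reverse(s):
    """Exercise 3: reverse a string."""
    return s[::-1]

def sum_of_squares(xs):
    """Exercise 9: sum of the squares of the numbers in xs."""
    return sum(x * x for x in xs)

def find_hypot(a, b):
    """Exercise 13: hypotenuse of a right-angled triangle."""
    return math.sqrt(a * a + b * b)

def is_even(n):
    """Exercise 14."""
    return n % 2 == 0

print(reverse('happy'))            # 'yppah'
print(sum_of_squares([2, 3, 4]))   # 29
print(find_hypot(3, 4))            # 5.0
print(is_even(10))                 # True
```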
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19118644297122955, "perplexity": 966.2969068907136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00401.warc.gz"}
https://export.arxiv.org/abs/1908.00083
math.CO

# Title: Cyclic sieving, skew Macdonald polynomials and Schur positivity

Abstract: When $\lambda$ is a partition, the specialized non-symmetric Macdonald polynomial $E_{\lambda}(x;q;0)$ is symmetric and related to a modified Hall–Littlewood polynomial. We show that whenever all parts of the integer partition $\lambda$ are multiples of $n$, the underlying set of fillings exhibits the cyclic sieving phenomenon (CSP) under a cyclic shift of the columns. The corresponding CSP polynomial is given by $E_{\lambda}(x;q;0)$. In addition, we prove a refined cyclic sieving phenomenon where the content of the fillings is fixed. This refinement is closely related to an earlier result by B. Rhoades. We also introduce a skew version of $E_{\lambda}(x;q;0)$. We show that these are symmetric and Schur-positive via a variant of the Robinson–Schensted–Knuth correspondence, and we also describe crystal raising and lowering operators for the underlying fillings. Moreover, we show that the skew specialized non-symmetric Macdonald polynomials are in some cases vertical-strip LLT polynomials. As a consequence, we get a combinatorial Schur expansion of a new family of LLT polynomials.

Subjects: Combinatorics (math.CO); Representation Theory (math.RT)
MSC classes: 05E10, 05E05
Cite as: arXiv:1908.00083 [math.CO] (or arXiv:1908.00083v1 [math.CO] for this version)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6091850399971008, "perplexity": 801.965806584299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00761.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/jimo.2021038
# American Institute of Mathematical Sciences

doi: 10.3934/jimo.2021038 (Online First)

## Designing and analysis of a Wi-Fi data offloading strategy catering for the preference of mobile users

1 Shanghai Jiao Tong University, 800 Dongchuan Rd, Shanghai, China
2 The Chinese University of Hong Kong (Shenzhen), 2001 Longxiang Boulevard, Longgang District, Shenzhen, China

* Corresponding author: Xiaoyi Zhou

Received July 2020. Revised December 2020. Early access March 2021.

Citation: Xiaoyi Zhou, Tong Ye, Tony T. Lee. Designing and analysis of a Wi-Fi data offloading strategy catering for the preference of mobile users. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021038

Figure captions:
• Transition of wireless channel states in urban areas
• State transition of the data transmission
• Relationship between two kinds of embedded points
• The $m$th frame starts service in the cellular state, while the last event is (a) the $(m-1)$th frame starts its service in the cellular state, or (b) the service state transits to the cellular state when the $(m-1)$th frame is in service
• The service time when a frame starts its service in the deferred state
• Waiting time of the newly-arrived frame
• Delay and efficiency performance in the M/MMSP/1 queueing system
• Utility vs. deadline in the M/MMSP/1 queueing system
• Utility $U$ vs. preference weight $a$ in the M/MMSP/1 queueing system
• Utility $U$ vs. preference weight $a$ when the durations of channel states $C$ and $F$ follow the truncated Pareto distribution
• Utility $U$ vs. preference weight $a$ when the data frame size is dual-fixed
• Utility $U$ vs. preference weight $a$ when the data rate of each Wi-Fi hotspot is different
• State transition diagram of the two-dimensional Markov chain

Table: Parameters employed in the performance study

| Parameter | Value |
| --- | --- |
| Mean duration of channel state $C$ | 28.42 s |
| Mean duration of channel state $F$ | 12.57 s |
| Data rate of cellular network | 8.7 Mbps |
| Data rate of Wi-Fi hotspots | 24.4 Mbps |
| Mean frame size | 8.184 Kb |
| Arrival rate of data frames | 800 frames/s |
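A rough simulation of the two-state channel model behind the parameter table (assuming, as an illustration only, exponentially distributed state durations; per the figure captions, the paper also studies a truncated Pareto variant):

```python
import random

MEAN_C, MEAN_F = 28.42, 12.57   # mean state durations in seconds (Table 1)

def fraction_in_F(n_cycles=100_000, seed=0):
    """Long-run fraction of time the channel spends in state F."""
    rng = random.Random(seed)
    t_c = sum(rng.expovariate(1 / MEAN_C) for _ in range(n_cycles))
    t_f = sum(rng.expovariate(1 / MEAN_F) for _ in range(n_cycles))
    return t_f / (t_c + t_f)

print(fraction_in_F())  # ~0.307 = 12.57 / (28.42 + 12.57)
```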
doi: 10.3934/nhm.2013.8.803 [3] Samuel N. Cohen, Lukasz Szpruch. On Markovian solutions to Markov Chain BSDEs. Numerical Algebra, Control & Optimization, 2012, 2 (2) : 257-269. doi: 10.3934/naco.2012.2.257 [4] Ying Sue Huang, Chai Wah Wu. Stability of cellular neural network with small delays. Conference Publications, 2005, 2005 (Special) : 420-426. doi: 10.3934/proc.2005.2005.420 [5] Marcelo Sobottka. Right-permutative cellular automata on topological Markov chains. Discrete & Continuous Dynamical Systems, 2008, 20 (4) : 1095-1109. doi: 10.3934/dcds.2008.20.1095 [6] Ajay Jasra, Kody J. H. Law, Yaxian Xu. Markov chain simulation for multilevel Monte Carlo. Foundations of Data Science, 2021, 3 (1) : 27-47. doi: 10.3934/fods.2021004 [7] Jian Liu, Xin Wu, Jiang-Ling Lei. The combined impacts of consumer green preference and fairness concern on the decision of three-party supply chain. Journal of Industrial & Management Optimization, 2021  doi: 10.3934/jimo.2021090 [8] Liping Zhang. A nonlinear complementarity model for supply chain network equilibrium. Journal of Industrial & Management Optimization, 2007, 3 (4) : 727-737. doi: 10.3934/jimo.2007.3.727 [9] Jia Shu, Jie Sun. Designing the distribution network for an integrated supply chain. Journal of Industrial & Management Optimization, 2006, 2 (3) : 339-349. doi: 10.3934/jimo.2006.2.339 [10] Jingzhi Tie, Qing Zhang. An optimal mean-reversion trading rule under a Markov chain model. Mathematical Control & Related Fields, 2016, 6 (3) : 467-488. doi: 10.3934/mcrf.2016012 [11] Ralf Banisch, Carsten Hartmann. A sparse Markov chain approximation of LQ-type stochastic control problems. Mathematical Control & Related Fields, 2016, 6 (3) : 363-389. doi: 10.3934/mcrf.2016007 [12] Kun Fan, Yang Shen, Tak Kuen Siu, Rongming Wang. On a Markov chain approximation method for option pricing with regime switching. Journal of Industrial & Management Optimization, 2016, 12 (2) : 529-541. doi: 10.3934/jimo.2016.12.529 [13] Amin Aalaei, Hamid Davoudpour. Two bounds for integrating the virtual dynamic cellular manufacturing problem into supply chain management. Journal of Industrial & Management Optimization, 2016, 12 (3) : 907-930. doi: 10.3934/jimo.2016.12.907 [14] Liu Hui, Lin Zhi, Waqas Ahmad. Network(graph) data research in the coordinate system. Mathematical Foundations of Computing, 2018, 1 (1) : 1-10. doi: 10.3934/mfc.2018001 [15] Qinglei Zhang, Wenying Feng. Detecting coalition attacks in online advertising: A hybrid data mining approach. Big Data & Information Analytics, 2016, 1 (2&3) : 227-245. doi: 10.3934/bdia.2016006 [16] Ashkan Mohsenzadeh Ledari, Alireza Arshadi Khamseh, Mohammad Mohammadi. A three echelon revenue oriented green supply chain network design. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 157-168. doi: 10.3934/naco.2018009 [17] Lin Xu, Rongming Wang. Upper bounds for ruin probabilities in an autoregressive risk model with a Markov chain interest rate. Journal of Industrial & Management Optimization, 2006, 2 (2) : 165-175. doi: 10.3934/jimo.2006.2.165 [18] Xi Zhu, Meixia Li, Chunfa Li. Consensus in discrete-time multi-agent systems with uncertain topologies and random delays governed by a Markov chain. Discrete & Continuous Dynamical Systems - B, 2020, 25 (12) : 4535-4551. doi: 10.3934/dcdsb.2020111 [19] Kazuhiko Kuraya, Hiroyuki Masuyama, Shoji Kasahara. Load distribution performance of super-node based peer-to-peer communication networks: A nonstationary Markov chain approach. 
Numerical Algebra, Control & Optimization, 2011, 1 (4) : 593-610. doi: 10.3934/naco.2011.1.593 [20] Ralf Banisch, Carsten Hartmann. Addendum to "A sparse Markov chain approximation of LQ-type stochastic control problems". Mathematical Control & Related Fields, 2017, 7 (4) : 623-623. doi: 10.3934/mcrf.2017023 2020 Impact Factor: 1.801
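The parameter table pins down a simple alternating-renewal picture of the channel. As a rough companion illustration (my sketch, not from the paper; it assumes state $F$ is the Wi-Fi state and uses only the table's mean durations), the long-run state fractions and the average raw data rate come out as:

```python
# Long-run fractions of time in each channel state, assuming the mean
# durations from the parameter table (alternating renewal process).
mean_C, mean_F = 28.42, 12.57          # seconds (cellular / Wi-Fi)
rate_C, rate_F = 8.7, 24.4             # Mbps

p_C = mean_C / (mean_C + mean_F)       # fraction of time cellular-only
p_F = 1 - p_C                          # fraction of time under Wi-Fi coverage
avg_rate = p_C * rate_C + p_F * rate_F
print(f"time in C: {p_C:.1%}, time in F: {p_F:.1%}, avg rate: {avg_rate:.2f} Mbps")
```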
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38191333413124084, "perplexity": 8516.50017949997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00026.warc.gz"}
https://sshwy.name/2019/06/13380/
# Congruence Equations and Discrete Logarithms

(Translated from Chinese; most of the original post did not survive extraction. The surviving values form the index table below, reconstructed as pairs; the entries are consistent with discrete logarithms taken to the base 3 modulo 17.)

$a$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
$\operatorname{ind} a$ | 16 | 14 | 1 | 12 | 5 | 15 | 11 | 10 | 2 | 3 | 7 | 13 | 4 | 9 | 6 | 8

Please credit Sshwy's Blog when reposting.

Related posts: 原根与模算术 (Primitive Roots and Modular Arithmetic) · [NOIP2012] 开车旅行, 2019.06.08
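The garbled values regenerate cleanly under the base-3-mod-17 reading. A short Python sketch (hypothetical, not from the original post) that rebuilds the table:

```python
# Build the index (discrete logarithm) table for base 3 modulo 17:
# ind[a] = k such that 3^k ≡ a (mod 17), for k = 1..16.
p, g = 17, 3
ind = {}
x = 1
for k in range(1, p):      # 3 is a primitive root mod 17, so this hits every a
    x = x * g % p
    ind[x] = k

for a in range(1, p):
    print(f"ind {a:2d} = {ind[a]:2d}")
```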
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.450400173664093, "perplexity": 379.2561214516536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894203.73/warc/CC-MAIN-20201027140911-20201027170911-00006.warc.gz"}
https://jeopardylabs.com/play/algebra-jeopardy6
Systems | Special Systems | Linear Inequalities | Exponents and polynomials | Potpourri

### 100

The solution of each system y=5x-10 y=3x+8 (Solve by substitution or elimination) What is (9,35)?

### 100

A system that has exactly one solution. The graph of this system consists of 2 intersecting lines. What is an independent system?

### 100

The ordered pair (7,3) is or is not a solution to the inequality y What is a solution?

### 100

The solution of this problem when simplified: 8 to the zero power. What is 1?

### 100

Scientific notation is a method of writing numbers that are very large or very small. The second part of scientific notation is written to this power. What is 10?

### 200

The solution to each system 2x+y=2 -4x+4y=12 What is (-1,2)?

### 200

When graphing 2 linear equations with the same slope but different y-intercepts, the lines are intersecting, coincident, or parallel... What is parallel?

### 200

When the inequality is written y< or y>, the points on the boundary line are not solutions of the inequality. The line on the graph is (solid, dashed, there is no boundary line). What is dashed?

### 200

Simplify a^-7b^2. What is b^2/a^7?

### 200

The coefficient of the first term of a polynomial in standard form. What is the leading coefficient?

### 300

The solution to each system 3x+2y=6 -x+y=-2 What is (2,0)?

### 300

Look at the slope and y-intercept of the following equations: y=2x-2 y=x+1 Are these two lines parallel, intersecting or coincident (the same line)? What is intersecting?

### 300

When the inequality is written as y is less than or equal to, the points (above, below) the line are solutions of the inequality. What is below the boundary line?

### 300

When dividing two numbers that have exponents in them, you (add, subtract, multiply) the exponents. What is subtract?

### 300

The degree of the term of the polynomial with the greatest degree. What is the degree of a polynomial?

### 400

The solution to each system 3x-y=-2 -2x+y=3 What is (1,5)?

### 400

The slopes and y-intercepts of coincident lines are... (same, different, neither of these answers) What is the same?

### 400

When two linear inequalities are graphed, the coordinates in the (unshaded area, overlapping shaded area) are solutions to the systems. What is overlapping shaded area?

### 400

The simplification of this sum of polynomials: 4x^3 + 8x^2 + 2x + 3x^3 + x^2 + 4x = What is 7x^3 + 9x^2 + 6x?

### 400

The product of any number and a whole number. What is a multiple?

### 500

A method used to solve systems of equations by solving an equation for one variable and substituting the resulting expression into the other equation. What is substitution?

### 500

The number of solutions to the following systems: y=-2x+4 2x+y=4 What is an infinite amount of solutions?

### 500

If a boundary line is solid, the points on the boundary lines (are or are not) solutions. What is are solutions?

### 500

When writing a polynomial equation in standard form, the exponents of the variables are written in (ascending, descending) order. What is descending?

### 500

A number that is multiplied by another number to get a product. What is a factor?
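For readers who want to check board answers mechanically, here is a small sketch (mine, not part of the original quiz) that solves the first system with sympy, mirroring the substitution method:

```python
from sympy import symbols, Eq, solve

x, y = symbols("x y")
# First system from the board: y = 5x - 10 and y = 3x + 8.
sol = solve([Eq(y, 5 * x - 10), Eq(y, 3 * x + 8)], [x, y])
print(sol)  # {x: 9, y: 35}, matching the answer "(9, 35)"
```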
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363250136375427, "perplexity": 2249.022299458928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860570.57/warc/CC-MAIN-20180618144750-20180618164750-00216.warc.gz"}
http://e.math.hr/events/180409seminar-za-teoriju-reprezentacija-zci-quantixlie
# Representation Theory Seminar of the QuantiXLie Center of Excellence

Location: PMF Matematički odsjek (Department of Mathematics)
Time: 10.04.2018, 17:45 - 19:00

Representation theory seminar and Center of Excellence QuantiXLie

Tuesday, April 10 at 17:45, room 002

Vít Tuček: Unitarizable highest weight modules II

Unitarizable highest weight modules for a real Lie group G were classified in the eighties independently by several authors. They exist only when the pair (G, K), where K is maximal compact in G, is a so-called Hermitian symmetric pair. These pairs correspond precisely to parabolic subgroups of the complexification of G with Abelian nilradical. I will present the classification from two viewpoints, together with the Enright formula for the nilpotent cohomology of these modules, which generalizes the Kostant formula for G-dominant integral weights.

Everybody is invited.

Karmen Grizelj
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218480587005615, "perplexity": 6065.139554452408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00437.warc.gz"}
http://blog.bigsmoke.us/2012/07/13/apache-mod_proxy-configuration-for-the-pirate-bay
# Apache mod_proxy configuration for The Pirate Bay

I found several apache mod_proxy configs for setting up a proxy for The Pirate Bay, but none worked fully.

You need to enable/install:

• mod_proxy
• mod_rewrite
• mod_proxy_http

<Virtualhost *:80>
    ServerName tpb.yourdomain.com

    # Plausible deniability, and respecting your fellow pirate's privacy.
    Loglevel emerg
    CustomLog /dev/null combined
    ErrorLog /dev/null

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    # Just to fix a few links...
    RewriteEngine On
    RewriteRule \/static\.thepiratebay\.se\/(.*)$ /static/$1 [R=302,L]

    ProxyRequests off

    # Cookies are important to be able to disable the annoying double-row mode.
    # The . before the domain is required, but I don't know why :)

    ProxyPass / http://thepiratebay.se/
    ProxyPass /static/ http://static.thepiratebay.se/
    ProxyPass /torrents/ http://torrents.thepiratebay.se/

    ProxyHTMLURLMap http://thepiratebay.se /
    ProxyHTMLURLMap http://([a-z]*).thepiratebay.se /$1 R
    ProxyHTMLEnable On

    <Location /static/>
        ProxyPassReverse /
        SetOutputFilter proxy-html
        ProxyHTMLURLMap / /static/
        RequestHeader unset Accept-Encoding
    </Location>

    <Location /torrents/>
        ProxyPassReverse /
        SetOutputFilter proxy-html
        ProxyHTMLURLMap / /torrents/
        RequestHeader unset Accept-Encoding
    </Location>
</Virtualhost>

## 7 Comments ( Add comment / trackback )

1. Comment by Rowan Rodrik On July 22, 2012 at 16:43

Did you notice that Ziggo's PB "blockage" is done on the DNS level? Here's an excerpt from my /etc/hosts which undoes BREIN's pathetic attempt at censorship:

    $ grep pirate /etc/hosts
    178.73.210.219 thepiratebay.se
    178.73.210.219 www.thepiratebay.se
    178.73.210.219 thepiratebay.org
    178.73.210.219 www.thepiratebay.org

Here it is explained for the DNS-illiterates who do not know what a hosts file is and for whom that file lives at the unholy c:\windows\system32\drivers\etc\hosts instead of /etc/hosts.

2. Comment by halfgaar On July 25, 2012 at 09:22

I thought the court order required them to also block a list of IPs. I wonder why they chose not to do that. Perhaps their infrastructure doesn't make it easy, or they wanted to give customers an easy way out… Anyway, I use KPN, so I need my proxy.

3. Comment by Rowan Rodrik On July 26, 2012 at 13:28

KPN does block the IPs?

4. Comment by halfgaar On July 27, 2012 at 08:20

When I visit thepiratebay.se, they say they block this list of IPs: 194.71.107.15, 194.71.107.19, 194.71.107.80, 194.71.107.81, 194.71.107.82, 194.71.107.83. I don't know what the IP is you posted, but maybe it's a proxy and the hosts file makes it available under the normal name. There is one easy check: can you change column view to single row per torrent? That setting is stored in a cookie and most proxies mess that up.

5. Comment by Rowan Rodrik On July 27, 2012 at 11:05

Haha, indeed the cookie doesn't work, so it must be a proxy then. And there I was thinking that Ziggo were being amateurs (or plain naughty), while I was being an amateur myself. 😉

6. Comment by the rebel On July 27, 2014 at 23:55

easily get a working tpb proxy at http://tpbnet.org/ , just click on the Banner. If your ISP blocked other torrent sites, you can try: http://unblock.pro/ Cheers

7. Comment by Rowan Rodrik On June 28, 2018 at 11:20

Wiebe and I just spent a good hour on debugging why our ProxyHTMLURLMap didn't do anything. Well, that was because Apache 2.4 introduced an on/off switch for the feature: ProxyHTMLEnable.

Unsurprisingly, the relationship between these two directives wasn't documented in the mod_proxy_html documentation (neither in the ProxyHTMLURLMap section, nor under ProxyHTMLEnable). Where do we report documentation bugs?
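A quick way to sanity-check a setup like the one above is to fetch a page through the proxy and look for unrewritten upstream links. A small sketch (my addition; the hostname is the placeholder from the vhost):

```python
# Smoke test for the reverse proxy: checks that absolute links to the
# upstream site were rewritten by mod_proxy_html. Adjust the hostname.
import requests

html = requests.get("http://tpb.yourdomain.com/", timeout=10).text
leaked = [u for u in ("http://thepiratebay.se", "http://static.thepiratebay.se")
          if u in html]
print("leaked upstream URLs:", leaked or "none - URL mapping works")
```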
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33635756373405457, "perplexity": 8224.440834341161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159160.59/warc/CC-MAIN-20180923055928-20180923080328-00256.warc.gz"}
https://docs.fintechos.com/InnovationCore/20.1.3/UserGuide/Content/DEVOPS/ConfigureUploadEBS.htm
# Configure the File Upload Folder

When building a web application that requires users to upload or download files (documents, images, etc.), file storage can be an important aspect of the application architecture.

## Where Should I Store Files?

The FintechOS platform supports multiple storage providers for storing the uploaded or generated user files. When building web applications using FintechOS technology, you've got a few choices for where to store your files:

The local file system refers to either a local path on the application server or a shared folder on the network containing the application server. While it is the default storage provider, you might be running out of disk space or you might find it a very challenging task to ensure that files are properly backed up and available at all times. If you'll be storing large blobs of content, you might want to consider one of the other options.

Storing files in a file storage service like Amazon S3 Buckets or Azure Blob is a great option if you'll be storing large blobs of content. Not only can you rest assured that your data is replicated and backed up, but these services also ensure scalability and high availability.

This section walks you through the steps needed to configure the "UploadEbs" storage provider/location as needed.

## Local File System Storage

There are no special configurations that have to be made in order to use it, other than setting the name of the root folder. To set the name of the root folder, open the web.config file and add the application setting UploadFolder to the appSettings node, as described below:

<configuration>
  ...
  <appSettings>
    ...
    <add key="UploadFolder" value="~/path/to/uploadfolder/"/>
  </appSettings>
</configuration>

Depending on where the root folder resides, make sure that you properly set the value of the UploadFolder setting:

• subfolder of the application folder: "~/path/to/uploadfolder/";
• local folder on the application server: the full path to the local folder, like "c:\path\to\uploadfolder".

NOTE: If you do not set the UploadFolder setting in the web.config file, it is automatically set to the default value, that is, "~/UploadEBS/".

### Automatically Create File Upload Subfolders

IMPORTANT! This feature is available only for local file system storage. It is not available for Azure Blob Storage or Amazon S3 Buckets Storage.

You can automatically group uploaded files into folders based on the last three characters in their file name (excluding the file extension). To do so, add a feature-uploadfolder-autocreate-subfolders key with a value of 1 in the web.config file:

<add key="feature-uploadfolder-autocreate-subfolders" value="1"/>

This will save each uploaded file in a -.files\xyz subfolder of the upload folder, where xyz represents the last three characters of the file name. For example, a file called MyDoc_0caf99b6-549d-48f7-8747-5e3eb82753fd.txt will be saved in a folder structure similar to:

...\
  -.files\
    3fd\
      MyDoc_0caf99b6-549d-48f7-8747-5e3eb82753fd.txt

Setting the feature-uploadfolder-autocreate-subfolders key value to 0 disables the feature. This feature is backward compatible: if a requested file is not stored in the above folder structure, it will be read from the main upload folder or the entity-specific upload folder, respectively.

## Azure Blob Storage

To configure FintechOS to store user files in Azure Blob, follow these steps:

1. Open the web.config file.

2. Add a ftosStorageService section to the <configSections> element:

<configuration>
  <configSections>
    ...
    <section name="ftosStorageService" type="EBS.Core.Utils.Services.Config.StorageServiceConfigSection, EBS.Core.Utils"/>
  </configSections>
</configuration>

3. Add a ftosStorageService section (note the AzureBlob type) as a child of the <configuration> element:

<configuration>
  ...
  <ftosStorageService type="AzureBlob">
    <settings>
      <setting name="connectionString" value="connection_string"/>
      <setting name="rootContainer" value="root_container"/>
    </settings>
  </ftosStorageService>
</configuration>

where:

• connectionString is the connection string FintechOS uses to connect to an Azure Blob container;
• rootContainer is the root container name where the user files will be stored.

### Azure Resource Manager Templates Support

To enable automatic deployment through ARM templates, the connectionString and rootContainer settings must be configured in the <appSettings> element of the web.config file:

<appSettings>
  ...
</appSettings>

IMPORTANT! Values set in the <appSettings> keys take precedence over the values set in the <ftosStorageService> settings node.

## Amazon S3 Buckets Storage

To configure FintechOS to store user files in Amazon S3 Buckets, follow these steps:

1. Open the web.config file.

2. To the <configSections> element, add the following two sections, ftosStorageService and aws, as described below:

<configuration>
  <configSections>
    ...
    <section name="ftosStorageService" type="EBS.Core.Utils.Services.Config.StorageServiceConfigSection, EBS.Core.Utils"/>
    <section name="aws" type="Amazon.AWSSection, AWSSDK.Core"/>
  </configSections>
</configuration>

3. Add the <ftosStorageService> tag (note the AmazonS3Bucket type) as a child of the configuration element:

<configuration>
  ...
  <ftosStorageService type="AmazonS3Bucket">
    <settings>
      <setting name="AWSAccessKey" value="access_key" />
      <setting name="AWSSecretKey" value="secret_key" />
      <setting name="BucketName" value="bucket_name"/>
    </settings>
  </ftosStorageService>
</configuration>

where:

• AWSAccessKey and AWSSecretKey are used by FTOS to sign the requests made to AWS. For more information, see Access Keys (Access Key ID and Secret Access Key).
• BucketName is the root bucket name where the user files will be stored.

4. Add the aws section as a child of the configuration element:

<configuration>
  ...
  <aws region="aws_region">
  </aws>
</configuration>

NOTE: The only required attribute is region. For a complete list of available regions, see the Amazon documentation, section Regions, Availability Zones, and Local Zones. The region attribute must have one of the values from the column "Region", e.g.: <aws region="eu-central-1"></aws>. For a list of allowed elements in the aws section, see Configuration Files Reference for AWS SDK for .NET.
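The subfolder rule described under "Automatically Create File Upload Subfolders" is easy to mirror for tooling or migrations. A sketch (my own illustration, not FintechOS code; the folder names are taken from the example above):

```python
import os

def upload_path(upload_root: str, entity_folder: str, filename: str) -> str:
    """Mirror the documented rule: group files into a subfolder named after
    the last three characters of the file name, excluding the extension."""
    stem, _ext = os.path.splitext(filename)
    return os.path.join(upload_root, entity_folder, stem[-3:], filename)

print(upload_path(r"c:\uploads", "-.files",
                  "MyDoc_0caf99b6-549d-48f7-8747-5e3eb82753fd.txt"))
# On Windows: c:\uploads\-.files\3fd\MyDoc_0caf99b6-549d-48f7-8747-5e3eb82753fd.txt
```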
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1870037168264389, "perplexity": 6095.449890958378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00616.warc.gz"}
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_genmod_details20.htm
# The GENMOD Procedure

### F Statistics

Suppose that $D_0$ is the deviance resulting from fitting a generalized linear model and that $D_1$ is the deviance from fitting a submodel. Then, under appropriate regularity conditions, the asymptotic distribution of $(D_1 - D_0)/\phi$ is chi-square with $r$ degrees of freedom, where $r$ is the difference in the number of parameters between the two models and $\phi$ is the dispersion parameter.

If $\phi$ is unknown, and $\hat{\phi}$ is an estimate of $\phi$ based on the deviance or Pearson's chi-square divided by degrees of freedom, then, under regularity conditions, $(n-p)\hat{\phi}/\phi$ has an asymptotic chi-square distribution with $n-p$ degrees of freedom. Here, $n$ is the number of observations and $p$ is the number of parameters in the model that is used to estimate $\phi$. Thus, the asymptotic distribution of

$$F = \frac{(D_1 - D_0)/r}{\hat{\phi}}$$

is the F distribution with $r$ and $n-p$ degrees of freedom, assuming that $(D_1 - D_0)/\phi$ and $(n-p)\hat{\phi}/\phi$ are approximately independent.

This F statistic is computed for the Type 1 analysis, Type 3 analysis, and hypothesis tests specified in CONTRAST statements when the dispersion parameter is estimated by either the deviance or Pearson's chi-square divided by degrees of freedom, as specified by the DSCALE or PSCALE option in the MODEL statement. In the case of a Type 1 analysis, model 0 is the higher-order model obtained by including one additional effect in model 1. For a Type 3 analysis and hypothesis tests, model 0 is the full specified model and model 1 is the submodel obtained from constraining the Type III contrast or the user-specified contrast to be 0.
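As a concrete companion (not from the SAS documentation; it substitutes Python's statsmodels for PROC GENMOD), the following sketch computes this F statistic for a full model and a one-effect submodel, estimating the dispersion by deviance divided by degrees of freedom, as DSCALE would:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.5 * x1))     # x2 is irrelevant by design

X_full = sm.add_constant(np.column_stack([x1, x2]))   # model 0 (full)
X_sub = sm.add_constant(x1)                           # model 1 (submodel)

fam = sm.families.Poisson()
full = sm.GLM(y, X_full, family=fam).fit()
sub = sm.GLM(y, X_sub, family=fam).fit()

r = X_full.shape[1] - X_sub.shape[1]        # difference in parameter counts
phi_hat = full.deviance / full.df_resid     # dispersion estimate (DSCALE-style)
F = (sub.deviance - full.deviance) / r / phi_hat
p_value = stats.f.sf(F, r, full.df_resid)   # F(r, n - p) reference distribution
print(F, p_value)
```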
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475160241127014, "perplexity": 429.1220852678693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891105.83/warc/CC-MAIN-20180122054202-20180122074202-00586.warc.gz"}
http://ia.cr/cryptodb/data/author.php?authorkey=4148
## CryptoDB

### Xiutao Feng

#### Publications

Year | Venue | Title

2017 TOSC: Many block ciphers use permutations defined over the finite field $\mathbb{F}_{2^{2k}}$ with low differential uniformity, high nonlinearity, and high algebraic degree to provide confusion. Due to the lack of knowledge about the existence of almost perfect nonlinear (APN) permutations over $\mathbb{F}_{2^{2k}}$, which have the lowest possible differential uniformity, when $k > 3$, constructions of differentially 4-uniform permutations are usually considered. However, it is also very difficult to construct such permutations together with high nonlinearity; there are very few known families of such functions, which can have the best known nonlinearity and a high algebraic degree. At Crypto'16, Perrin et al. introduced a structure named butterfly, which leads to permutations over $\mathbb{F}_{2^{2k}}$ with differential uniformity at most 4 and very high algebraic degree when $k$ is odd. It was posed as an open problem in Perrin et al.'s paper, and solved by Canteaut et al., that the nonlinearity is equal to $2^{2k-1} - 2^k$. In this paper, we extend Perrin et al.'s work and study the functions constructed from butterflies with exponent $e = 2^i + 1$. It turns out that these functions over $\mathbb{F}_{2^{2k}}$ with odd $k$ have differential uniformity at most 4 and algebraic degree $k+1$. Moreover, we prove that for any integer $i$ and odd $k$ such that $\gcd(i, k) = 1$, the nonlinearity equality holds, which also gives another solution to the open problem proposed by Perrin et al. This greatly expands the list of differentially 4-uniform permutations with good nonlinearity and hence provides more candidates for the design of block ciphers.

2011 FSE

2010 ASIACRYPT

#### Coauthors

Dengguo Feng (1), Shihui Fu (1), Jun Liu (1), Chuankun Wu (2), Baofeng Wu (1), Chunfang Zhou (1), Zhaocun Zhou (1)
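Differential uniformity, the central quantity in the abstract above, can be brute-forced for small fields. A sketch (my illustration, not code from the paper; it uses the Gold exponent $e = 2^1 + 1 = 3$ over the small field $\mathbb{F}_{2^4}$ rather than the paper's butterfly construction):

```python
# Differential uniformity of an S-box F on F_{2^n}, elements as ints:
# delta(F) = max over a != 0 and all b of #{x : F(x ^ a) ^ F(x) = b}.
# Field arithmetic: F_{2^4} with primitive polynomial x^4 + x + 1 (0b10011).
MOD, N = 0b10011, 4

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N & 1:      # reduce when degree reaches 4
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

F = [gf_pow(x, 3) for x in range(1 << N)]   # Gold exponent e = 2^1 + 1 = 3

delta = 0
for a in range(1, 1 << N):
    counts = {}
    for x in range(1 << N):
        b = F[x ^ a] ^ F[x]                 # addition in F_{2^n} is XOR
        counts[b] = counts.get(b, 0) + 1
    delta = max(delta, max(counts.values()))
print("differential uniformity:", delta)    # 2 here: x^3 is APN on F_{2^4}
```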
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329655170440674, "perplexity": 1135.4568121412096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00292.warc.gz"}
https://www.jiskha.com/questions/1429344/the-g-value-for-formation-of-gaseous-water-at-298-k-and-1-atm-is-278-kj-mol-what-is-the
# AP Chemistry

The ΔG value for formation of gaseous water at 298 K and 1 atm is -278 kJ/mol. What is the nature of the spontaneity of formation of gaseous water at these conditions?

dG is negative. The reaction is spontaneous.
dG = -; rxn spontaneous
dG = 0; rxn about 50/50
dG = +; rxn not spontaneous in the direction shown, but is spontaneous for the reverse rxn.

## Similar Questions

1. ### Chemistry

Write a balanced equation for the combustion of gaseous ethylene (C2H4), an important natural plant hormone, in which it combines with gaseous oxygen to form gaseous carbon dioxide and gaseous water.

2. ### Chemistry

At 298 K, the Henry's law constant for oxygen is 0.00130 M/atm. Air is 21.0% oxygen. At 298 K, what is the solubility of oxygen in water exposed to air at 1.00 atm? At 298 K, what is the solubility of oxygen in water exposed to

3. ### Chemistry

Write a balanced equation for the combustion of gaseous propane (C3H8), a minority component of natural gas, in which it combines with gaseous oxygen to form gaseous carbon dioxide and gaseous water. I answered C3H8 + 5O2 = 3CO2 +

4. ### Chemistry

For a gaseous reaction, standard conditions are 298 K and a partial pressure of 1 atm for all species. For the reaction 2NO(g) + O2(g) --> 2NO2(g) the standard change in Gibbs free energy is ΔG° = -69.0 kJ/mol. What is ΔG for

1. ### Chemistry

The standard enthalpy of combustion of C2H6O(l) is -1,367 kJ mol-1 at 298 K. What is the standard enthalpy of formation of C2H6O(l) at 298 K? Give your answer in kJ mol-1, rounded to the nearest kilojoule. Do not include units as

2. ### Chemistry

The ΔH°f of gaseous dimethyl ether (CH3OCH3) is –185.4 kJ/mol; the vapour pressure is 1.00 atm at –23.7°C and 0.526 atm at –37.8°C. a) Calculate ΔH°vap of dimethyl ether. b) Calculate ΔH°f of liquid dimethyl ether. Any

3. ### Chemistry 1202

3CH4(g) -> C3H8(g) + 2H2(g). Calculate the change in G at 298 K if the reaction mixture consists of 41 atm of CH4, 0.010 atm of C3H8, and 2.3×10−2 atm of H2.

4. ### Chemistry

Calculate the volume occupied by 1.5 moles of an ideal gas at 25 degrees Celsius and a pressure of 0.80 atm. (R = 0.08206 L·atm/(mol·K)). I've tried using the ideal gas law: PV=nRT but I can't seem to get where I am getting lost.

1. ### A.P Chemistry

A rigid 5.00 L cylinder contains 24.5 g of N2(g) and 28.0 g of O2(g). (a) Calculate the total pressure, in atm, of the gas mixture in the cylinder at 298 K. (b) The temperature of the gas mixture in the cylinder is decreased to 280

2. ### Chemistry 111

Gaseous ethane will react with gaseous oxygen to produce gaseous carbon dioxide and gaseous water. Suppose 17. g of ethane is mixed with 107. g of oxygen. Calculate the minimum mass of ethane that could be left over by the

3. ### Chemistry

Write a balanced equation for the incomplete combustion of gaseous pentane (C5H12), which combines with gaseous oxygen to form carbon dioxide, gaseous water, and carbon monoxide.

4. ### Chemistry

Fish breathe the dissolved air in water through their gills. Assuming the partial pressures of oxygen and nitrogen in the air to be .20 atm and .80 atm respectively, calculate the mole fractions of oxygen and nitrogen in the
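DrBob's sign rule can be applied numerically through the relation $\Delta G = \Delta G^\circ + RT\ln Q$. A sketch (mine, not from the thread; the reaction quotient here is an assumed illustrative value, not data from the question):

```python
import math

R = 8.314e-3     # kJ/(mol·K)
T = 298.0        # K

dG0 = -278.0     # kJ/mol, the standard value given in the question above
Q = 0.50         # assumed reaction quotient, for illustration only

dG = dG0 + R * T * math.log(Q)
print(f"dG = {dG:.1f} kJ/mol ->",
      "spontaneous" if dG < 0 else "not spontaneous")
```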
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896584510803223, "perplexity": 4436.404376268933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00371.warc.gz"}
http://pfam.xfam.org/family/ATP-sulfurylase?tab=alignBlock
36 structures · 1032 species · 2 interactions · 1308 sequences · 15 architectures

# Summary: ATP-sulfurylase

Pfam includes annotations and additional family information from a range of different sources. These sources can be accessed via the tabs below.

This is the Wikipedia entry entitled "Sulfate adenylyltransferase".

Identifiers: EC number 2.7.7.4; CAS number 9012-39-9; Pfam PF01747; InterPro IPR002650; SCOP 1i2d; SUPERFAMILY 1i2d. Structure shown: crystal structure of ATP sulfurylase from Thermus thermophilus HB8 in complex with APS.

In enzymology, a sulfate adenylyltransferase (EC 2.7.7.4) is an enzyme that catalyzes the chemical reaction

ATP + sulfate $\rightleftharpoons$ diphosphate + adenylyl sulfate

Thus, the two substrates of this enzyme are ATP and sulfate, whereas its two products are diphosphate and adenylyl sulfate.

This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is ATP:sulfate adenylyltransferase. Other names in common use include adenosine-5'-triphosphate sulfurylase, adenosinetriphosphate sulfurylase, adenylylsulfate pyrophosphorylase, ATP sulfurylase, ATP-sulfurylase, and sulfurylase. This enzyme participates in 3 metabolic pathways: purine metabolism, selenoamino acid metabolism, and sulfur metabolism. Some sulfate adenylyltransferases are part of a bifunctional polypeptide chain associated with adenosyl phosphosulfate (APS) kinase. Both enzymes are required for PAPS (phosphoadenosine-phosphosulfate) synthesis from inorganic sulfate.[1][2]

## Structural studies

As of late 2007, 18 structures have been solved for this class of enzymes, with PDB accession codes 1G8F, 1G8G, 1G8H, 1I2D, 1J70, 1JEC, 1JED, 1JEE, 1JHD, 1M8P, 1R6X, 1TV6, 1V47, 1X6V, 1XJQ, 1XNJ, 1ZUN, and 2GKS.

## Applications

ATP sulfurylase is one of the enzymes used in pyrosequencing.

## References

1. Rosenthal E, Leustek T (November 1995). "A multifunctional Urechis caupo protein, PAPS synthetase, has both ATP sulfurylase and APS kinase activities". Gene 165 (2): 243–8. doi:10.1016/0378-1119(95)00450-K. PMID 8522184.
2. Kurima K, Warman ML, Krishnan S, Domowicz M, Krueger RC, Deyrup A, Schwartz NB (July 1998). "A member of a family of sulfate-activating enzymes causes murine brachymorphism". Proc. Natl. Acad. Sci. U.S.A. 95 (15): 8681–5. doi:10.1073/pnas.95.15.8681. PMC 21136. PMID 9671738.

This article incorporates text from the public domain Pfam and InterPro IPR002650.

This tab holds the annotation information that is stored in the Pfam database. As we move to using Wikipedia as our main source of annotation, the contents of this tab will be gradually replaced by the Wikipedia tab.

# ATP-sulfurylase

This domain is the catalytic domain of ATP-sulfurylase, or sulfate adenylyltransferase (EC 2.7.7.4), some of which are part of a bifunctional polypeptide chain associated with adenosyl phosphosulphate (APS) kinase (PF01583). Both enzymes are required for PAPS (phosphoadenosine-phosphosulfate) synthesis from inorganic sulphate [2]. ATP sulfurylase catalyses the synthesis of adenosine-phosphosulfate (APS) from ATP and inorganic sulphate [1].

## Literature references

1. Kurima K, Warman ML, Krishnan S, Domowicz M, Krueger RC Jr, Deyrup A, Schwartz NB; Proc Natl Acad Sci U S A 1998;95:8681-8685. A member of a family of sulfate-activating enzymes causes murine brachymorphism. [Published erratum appears in Proc Natl Acad Sci U S A 1998 Sep 29;95(20):12071.] PUBMED:9671738 EPMC:9671738
2. Rosenthal E, Leustek T; Gene 1995;165:243-248. A multifunctional Urechis caupo protein, PAPS synthetase, has both ATP sulfurylase and APS kinase activities. PUBMED:8522184 EPMC:8522184

This tab holds annotation information from the InterPro database.

# InterPro entry IPR024951

This domain is the catalytic domain of ATP-sulfurylase, or sulphate adenylyltransferase (EC 2.7.7.4). ATP-sulfurylase catalyses the synthesis of adenosine-phosphosulphate (APS) from ATP and inorganic sulphate [PUBMED:9671738]. It is sometimes found as part of a bifunctional polypeptide chain associated with adenylylsulphate kinase. Both enzymes are required for PAPS (phosphoadenosine-phosphosulphate) synthesis from inorganic sulphate [PUBMED:8522184].

### Gene Ontology

The mapping between Pfam and Gene Ontology is provided by InterPro. If you use this data please cite InterPro.

# Domain organisation

Below is a listing of the unique domain organisations or architectures in which this domain is found.

# Pfam Clan

This family is a member of clan HUP (CL0039), which has the following description:

The HUP class contains the HIGH-signature proteins, the UspA superfamily, and the PP-ATPase superfamily [1]. The HIGH superfamily has the HIGH nucleotidyl transferases and the class I tRNA synthetases, both of which have the HIGH and the KMSKS motifs [1],[2]. The PP-loop ATPase, named after the ATP pyrophosphatase domain, was initially identified as a conserved amino acid sequence motif in four distinct groups of enzymes that catalyse the hydrolysis of the alpha-beta phosphate bond of ATP, namely GMP synthetases, argininosuccinate synthetases, asparagine synthetases, and ATP sulfurylases [3]. The UspA superfamily contains UspA, ETFP, and photolyases [1].

The clan contains 26 members, among them ATP-sulfurylase.

# Alignments

We store a range of different sequence alignments for families. As well as the seed alignment from which the family is built, we provide the full alignment, generated by searching the sequence database using the family HMM. We also generate alignments using four representative proteomes (RP) sets, the NCBI sequence database, and our metagenomics sequence database.

We make all of our alignments available in Stockholm format, downloadable as raw, plain-text files or as gzip-compressed files. You can also download a FASTA format file containing the full-length sequences for all sequences in the full alignment. Alignment sizes: Seed (113), Full (1308), RP15 (179), RP35 (315), RP55 (435), RP75 (520), NCBI (1170), Meta (549).

MyHits provides a collection of tools to handle multiple sequence alignments. For example, one can refine a seed alignment (sequence addition or removal, re-alignment or manual edition) and then search databases for remote homologs using HMMER3.

# HMM logo

HMM logos are one way of visualising profile HMMs. Logos provide a quick overview of the properties of an HMM in a graphical form.

# Trees

This page displays the phylogenetic tree for this family's seed alignment. We use FastTree to calculate neighbour-join trees with a local bootstrap based on 100 resamples (shown next to the tree nodes). FastTree calculates approximately-maximum-likelihood phylogenetic trees from our seed alignment.

# Curation and family details

This section shows the detailed information about the Pfam family.

## Curation

Seed source: Pfam-B_494 (release 4.2)
Previous IDs: none
Type: Domain
Author: Bashton M, Bateman A
Number in seed: 113
Number in full: 1308
Average length of the domain: 214.80 aa
Average identity of full alignment: 37%
Average coverage of the sequence by the domain: 43.51%

## HMM information

HMM build command: hmmbuild -o /dev/null HMM SEED
HMM search method: hmmsearch -Z 23193494 -E 1000 --cpu 4 HMM pfamseq

Model cut-offs:

Parameter | Sequence | Domain
--- | --- | ---
Gathering cut-off | 23.4 | 23.4
Trusted cut-off | 24.4 | 23.9
Noise cut-off | 22.8 | 22.8

Model length: 215
Family (HMM) version: 12

# Species distribution

The sunburst visualisation provides a simple graphical representation of the distribution of this family across species; the accompanying tree shows the occurrence of this domain across different species.
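The Stockholm alignments described above are straightforward to consume programmatically. A sketch (my addition; it assumes you have saved the seed alignment under the hypothetical file name PF01747_seed.sto) using Biopython:

```python
from Bio import AlignIO

# Parse the downloaded Pfam seed alignment (Stockholm format).
aln = AlignIO.read("PF01747_seed.sto", "stockholm")
print(f"{len(aln)} sequences, alignment length {aln.get_alignment_length()}")

# Peek at the first few aligned sequences.
for rec in aln[:3]:
    print(rec.id, str(rec.seq)[:40] + "...")
```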
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5395677089691162, "perplexity": 23897.793833034102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928865.24/warc/CC-MAIN-20150521113208-00176-ip-10-180-206-219.ec2.internal.warc.gz"}
http://cpr-quantph.blogspot.com/2013/06/13063991-jonathan-welch-et-al.html
## Efficient Quantum Circuits for Diagonal Unitaries Without Ancillas    [PDF]

Jonathan Welch, Daniel Greenbaum, Sarah Mostame, Alán Aspuru-Guzik

The accurate evaluation of diagonal unitary operators is often the most resource-intensive element of quantum algorithms such as real-space quantum simulation and Grover search. Efficient circuits have been demonstrated in some cases but generally require ancilla registers, which can dominate the qubit resources. In this paper, we point out a correspondence between Walsh functions and a basis for diagonal operators that gives a simple way to construct efficient circuits for diagonal unitaries without ancillas. This correspondence reduces the problem of constructing the minimal-depth circuit within a given error tolerance, for an arbitrary diagonal unitary $e^{if(\hat{x})}$ in the $|x\rangle$ basis, to that of finding the minimal-length Walsh-series approximation to the function $f(x)$. We apply this approach to the quantum simulation of the classical Eckart barrier problem of quantum chemistry, demonstrating that high-fidelity quantum simulations can be achieved with few qubits and low depth.

View original: http://arxiv.org/abs/1306.3991
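The core correspondence is easy to play with classically: the circuit angles come from the Walsh coefficients of $f$, and truncating the Walsh series trades depth for accuracy. A small sketch (my illustration, not the authors' code) computing the coefficients and verifying exact reconstruction when all terms are kept:

```python
import numpy as np

n = 4                                  # qubits
N = 1 << n
x = np.arange(N)
f = (x / N - 0.5) ** 2                 # any real function of the register

# Walsh coefficients a_j = (1/N) * sum_x f(x) * (-1)^(popcount(j & x))
signs = np.array([[(-1) ** bin(j & k).count("1") for k in range(N)]
                  for j in range(N)])
a = signs @ f / N

# The Walsh series reproduces f exactly when all N terms are kept;
# dropping the smallest coefficients gives the depth/accuracy trade-off.
f_rec = signs.T @ a
print(np.max(np.abs(f - f_rec)))       # ~1e-16
```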
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962814211845398, "perplexity": 1212.762897035175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945552.45/warc/CC-MAIN-20180422080558-20180422100558-00557.warc.gz"}
http://math.stackexchange.com/questions/289560/how-does-one-prove-a-b-c-%e2%8a%86-a-c-b-c
How does one prove (A - B) - C ⊆ (A - C) - (B - C)

When proving this I'm not sure how to 'take out' the C on the RHS of the relation.

The LHS is (x ∈ A) ∧ !(x ∈ B) ∧ !(x ∈ C)

The RHS is (x ∈ A) ∧ !(x ∈ C) ∧ !(x ∈ B) ∧ !(x ∈ C)

How does one prove LHS is a subset of RHS?

Take an element in the LHS and prove it must be in the RHS. – Patrick Li Jan 29 '13 at 6:34

Just show that each $x\in(A\setminus B)\setminus C$ belongs to $(A\setminus C)\setminus(B\setminus C)$. Suppose that $x\in(A\setminus B)\setminus C$; then $x\in A\setminus B$, and $x\notin C$. Since $x\in A\setminus B$, we know further that $x\in A$ and $x\notin B$.

Now put the pieces back together. First, $x\in A$ and $x\notin C$, so $x\in A\setminus C$. Moreover, $x\notin B$, so certainly $x\notin B\setminus C$, since $B\setminus C$ is a subset of $B$. But that means that $x\in A\setminus C$ and $x\notin B\setminus C$, which is exactly what's required to say that $x\in(A\setminus C)\setminus(B\setminus C)$.

Since $x$ was an arbitrary element of $(A\setminus B)\setminus C$, this shows that every element of $(A\setminus B)\setminus C$ belongs to $(A\setminus C)\setminus(B\setminus C)$ and hence that $(A\setminus B)\setminus C\subseteq(A\setminus C)\setminus(B\setminus C)$.

(I call this approach element-chasing. It's one of the most straightforward ways to prove that one set is a subset of another.)

Note that according to set-theory theorems, we have $A-B=A\cap B'$, where $B'$ is the complement of $B$ with respect to our universal set $U$. So we have:

$$D=(A-B)-C=(A\cap B')\cap C'$$

so if $x\in D$ then $x\in A\cap B'$ and $x\in C'$, hence $x\in A, x\in B', x\in C'$, so

$$x\in A,x\in C'\longrightarrow x\in(A-C)\\x\in B', x\in C'\longrightarrow x\in B',x\notin C\longrightarrow x\in(B'\cup C)\longrightarrow x\in(B\cap C')'$$

therefore $x\in(A-C)$ and $x\in(B\cap C')'$, which leads us to

$$x\in(A-C)\cap (B\cap C')'.$$

This is what you are looking for.

Nicely done, Babak! +1 – amWhy Jan 29 '13 at 15:57
@amWhy: :-) ... – Babak S. Jan 29 '13 at 19:23

Note that $(A\setminus C)\setminus (B\setminus C)$ is obtained by removing from $A\setminus C$ the part that is in $B\setminus C$. So we are removing a set that is a subset of $B$. It follows that $(A\setminus C)\setminus (B\setminus C)\supseteq (A\setminus C)\setminus B$. But $(A\setminus C)\setminus B= (A\setminus B)\setminus C$, since each is obtained by removing from $A$ the part of $A$ that is in $B$ or $C$.

The LHS is, as you have written, $$x \in A \land x \notin B \land x \notin C \tag{1}.$$ However, your RHS is wrong; it should be $$x \in A \land x \notin C \land \neg (x \in B \land x \notin C)$$ which is equivalent to $$x \in A \land x \notin C \land (x \notin B \lor x \in C).$$ Still, if you look closely, then you can see that we have $x \notin C$ there, so $(x \notin B \lor x \in C)$ simplifies to $x \notin B$. Finally we have $$x \in A \land x \notin C \land x \notin B \tag{2}$$ which is the same as (1) because of the commutativity of conjunction. Note that we have in fact proved the equality $$(A-B)-C = (A-C)-(B-C).$$ I hope this helps ;-)
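For anyone who wants a machine-checked version of the element-chasing argument, here is a sketch in Lean 4 (assuming a recent Mathlib; the proof term mirrors Brian's steps):

```lean
import Mathlib.Data.Set.Basic

example {α : Type*} (A B C : Set α) :
    (A \ B) \ C ⊆ (A \ C) \ (B \ C) := by
  rintro x ⟨⟨hxA, hxB⟩, hxC⟩            -- x ∈ A, x ∉ B, x ∉ C
  exact ⟨⟨hxA, hxC⟩, fun h => hxB h.1⟩   -- x ∈ B \ C would force x ∈ B
```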
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737640619277954, "perplexity": 98.85111892994367}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823634.2/warc/CC-MAIN-20140820021343-00077-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.newportaesthetics.com/pages/ccca1f-anchoring-and-adjustment-heuristic-example
# anchoring and adjustment heuristic example Choose from 35 different sets of Anchoring and Adjustment Heuristic flashcards on Quizlet. Learn Anchoring and Adjustment Heuristic with free interactive flashcards. Anchoring Heuristic. Keywords: bounded rationality; heuristics; cognitive biases; probabilistic reasoning;anchoring-and-adjustment;rationalprocessmodels ... and sample size but are influenced by irrelevant factors such as the ease of imagining an ... ity of anchoring-and-adjustment hinges on the question whether adjustment is a rational process. Advertising probably provides the best examples of anchoring you might know. Anchoring is so ubiquitous that it is thought to be a driving force behind a number of other biases and heuristics. Anchoring and Adjustment Heuristic in Option Pricing1 Hammad Siddiqi2 The University of Queensland [email protected] This Version: December 2015 Based on experimental and anecdotal evidence, an anchoring-adjusted option pricing model is developed in which the volatility of the underlying stock return is used as a starting point that gets Anchoring and adjustment heuristic [edit | edit source] Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess probabilities. To succeed in social interactions, people must gauge how … 7 This heuristic describes how, when estimating a certain value, we tend to give an initial value, then adjust it by increasing or decreasing our estimation. Examples 1 Ch 7 Anchoring Bias, Framing Effect, Confirmation Bias, Availability Heuristic, & Representative Heuristic Anchoring Anchoring is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. Representativeness Heuristic . According to this heuristic, we start with a reference point (or anchor) and then make adjustments to that reference point based on additional information in order to reach our estimate or choice. As a consequence, the anchoring and adjustment heuristic is often touted as robust and persistent (Chapman & Johnson, 2002). "People make estimates by starting from an initial value that is adjusted to yield the final answer," explained Amos Tversky and Daniel Kahneman in a 1974 paper. anchoring-and-adjustment heuristic). Anchoring and adjustment heuristic Opens in new window involves making a judgment by starting from some initial point and then adjusting to yield a final decision. The students were exhibiting a psychological heuristic known as anchoring and adjustment. Abstract: The anchoring-and-adjustment heuristic has been studied in numerous experimental settings and is increasingly drawn upon to explain systematically biased decisions in economic areas as diverse as auctions, real estate pricing, sports betting and forecasting. Consider this anchoring bias example from Harvard Business School and Harvard Law School faculty member Guhan Subramanian. Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess probabilities. This constitutes a significant shortcoming be-cause one cannot fully understand subadditivity, perspective taking, preference reversals, or any of the other phenomena Availability heuristic 3. The anchoring effect is a cognitive bias that influences you to rely too heavily on the first piece of information you receive. This phenomenon is called anchoring. 
In this instance, the number posted on the speed limit sign serves as the initial anchor—the arbitrary starting point—in the driver’s mind. The availability heuristic is when you make a judgment about something based on how available examples are in your mind. Instead, anchoring effects observed in the standard paradigm appear to be produced by the increased accessibility of … 1. The third type of heuristic put forth by Kahneman and Tversky in their initial paper on the topic is the anchoring and adjustment heuristic. This video comes from a complete social psychology course created in 2015 for Udemy.com. That first piece of information is the anchor and sets the tone for everything that follows. Thus, after 30 years of research on the anchoring-and-ad-justment heuristic, it remains unclear why adjustments tend to be insufficient. And it’s not just a factor between the generations. One example of these is the planning fallacy, a bias that describes how we tend to underestimate the time we’ll need to finish a task, as well as the costs of doing so. Anchoring is a cognitive bias where a specific piece of information is relied upon to make a decision. The anchoring and adjustment heuristic causes people us to rely too heavily on the initial piece of information offered (the “anchor”) when making decisions. The anchoring effect is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the “anchor”) when making decisions. People tend to unconsciously latch onto the first fact they hear, basing their decision-making on that fact. For example, a police officer pulls over a car for speeding. Anchoring and adjustment is assumed to bias estimates of risk and uncertainty (Yamagishi, 1994), judgments of self-efficacy (Cervone & Peake, 1986), and predictions of future performance (Czaczkes & Ganzach, 1996). The anchoring bias describes the common human tendency to […] Decision framing 5. Anchoring and adjustment 4. Anchoring and adjustment is a heuristic used in many situations where people estimate a number. In 1974 cognitive psychologists Daniel Kahneman and Amos Tversky identified what is known as the “anchoring heuristic.” A heuristic is essentially a mental shortcut or rule of thumb the brain uses to simplify complex problems in order to make decisions (also known as a cognitive bias). Anchoring and adjustment heuristic. So rather than ask for $3,000 for the car, they ask for$5,000. The Anchoring Heuristic, also know as focalism, refers to the human tendency to accept and rely on, the first piece of information received before making a decision. People start with an anchor and then adjust their inference away from that anchor with cognitive effort ( Epley et al., 2004 ). Anchoring and Adjustment Heuristics the tendency to judge the frequency or likelihood of an event by using a starting point (called an anchor) and then making adjustments up and down Know these examples: According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and make adjustments to it to reach their estimate. According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and make adjustments to it to reach their estimate. 
The anchoring and adjustment heuristic was first theorized by Amos Tversky and Daniel Kahneman.In one of their first studies, participants were asked to compute, within 5 seconds, the product of the numbers one through eight, either as 1 \times 2 \times 3 \times 4 \times 5 \times 6 \times 7 \times 8 or reversed as 8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1. Examples of common heuristics include anchoring and adjustment, theavailability heuristic, the representitaveness heuristic, naive diversification, escalation of commitment, and the familiarity heuristic. For example: “ Is the population of Venezuela more or less than 50 million?” One strategy for doing so, using what Tversky and Kahneman (1974) called the anchoring-and-adjustment heuristic, is to start with an accessible value in the context and adjust from this value to arrive at an acceptable value (quantity). We look at how you can take advantage of the anchoring effect to price your company's products or services, negotiate more effectively, market better, and make better business decisions. Anchoring or focalism is a term used in psychology to describe the common human tendency to rely too heavily, or "anchor," on one trait or piece of … Anchoring effects have traditionally been interpreted as a result of insufficient adjustment from an irrelevant value, but recent evidence casts doubt on this account. A well rounded brand uses anchoring in many subtle ways to get you to associate it with positive emotions. Representativeness heuristic 2. What exactly is anchoring in negotiation, and how does it play out at the bargaining table?. So, this heuristic has a lot to do with your memory of specific instances and what you’ve been exposed to. In other words, one factor is considered above all else in the decision-making processes. During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. This anchoring-and-adjustment heuristic is assumed to underlie many intuitive judgments, and insufficient adjustment is commonly invoked to explain judgmental biases. TERMS. A heuristic in which one assumes commonality between objects because they look similar. An important notion in the anchoring-and-adjustment mechanism is that the motivation for adjustments matters for the final judgment of affect, and that adjustment is a serial process. Mental Model: Anchoring. According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and make adjustments to it to reach their estimate. For example, used car salesmen often use ‘anchors’ to start negotiations. ... 3.6: The Anchoring-and-Adjustment Heuristic According to Tversky and Kahneman's original description, it involves starting from a readily available number—the "anchor"—and shifting either up or down to reach an answer that seems plausible. Anchoring heuristic examples keyword after analyzing the system lists the list of keywords related and the list of websites Anchoring and Adjustment Heuristic anchoring-heuristic The Anchoring and Adjustment Heuristic • People I'll briefly discuss some experiments and examples about the three heuristics. The initial point, known as the anchor, can come from the way a problem is framed, from historical factors, or from random information. Anchoring and adjustment is a psychological heuristic said to influence the way people assess probabilities intuitively.. 
In psychology, this type of cognitive bias is known as the anchoring bias or anchoring effect. As Tversky and Kahneman (1974) emphasized in "Judgment under Uncertainty: Heuristics and Biases", heuristics are used to reduce mental effort in decision making, but they may lead to systematic biases or errors in judgment: to make a final estimate, a person implicitly adjusts away from the anchor, and because the adjustment is typically insufficient, the final judgment remains biased toward the anchor.
https://codeyarns.com/tech/2014-07-09-how-to-add-subfigures-using-subfigure-package.html
# How to add subfigures using subfigure package

📅 2014-Jul-09 ⬩ ✍️ Ashwin Nanjappa ⬩ 🏷️ figure, latex, subfigure

Multiple figures are sometimes arranged together in a single figure in a paper or book written in LaTeX. This can be done using the subfig and subfloat packages. I recently discovered the subfigure package, which can be used to achieve the same with less code!

Below is LaTeX code to arrange three subfigures in two rows, one figure in the top row and two figures in the bottom row:

```latex
% Arrange three figures like this:
% Fig 1
% Fig 2  Fig 3
\begin{figure}
\centering
\subfigure[]
{
    \includegraphics[scale=.2]{foo1.pdf}
}
\\
\subfigure[]
{
    \includegraphics[scale=.5]{foo2.pdf}
}
\subfigure[]
{
    \includegraphics[scale=.26]{foo3.pdf}
}
\caption
{
    (a) blah (b) blah (c) blah
}
\label{fig:foobar}
\end{figure}
```

Tried with: Ubuntu 14.04
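For the snippet above to compile, the subfigure package has to be loaded in the preamble. A minimal wrapper sketch (the document class is an arbitrary choice, and foo1.pdf etc. are the placeholder file names from the post):

```latex
\documentclass{article}
\usepackage{graphicx}   % provides \includegraphics
\usepackage{subfigure}  % provides \subfigure; an older package, largely
                        % superseded today by subfig/subcaption
\begin{document}
% ... the figure environment from above goes here ...
\end{document}
```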
https://math.stackexchange.com/questions/578765/does-current-foundation-of-first-order-logic-need-a-fundamental-change
Does the current foundation of first-order logic need a fundamental change?

Note the following (not too exact) correspondence between natural and formal languages.

a. In a natural language we begin with an alphabet, a set of letters.
a'. In a first-order language we begin with a set of symbols.

b. In a natural language we construct (meaningful/legitimate) words from letters using particular rules. So an arbitrary finite sequence of letters is not necessarily a meaningful word.
b'. In a first-order language we construct (meaningful/legitimate) terms from symbols using particular rules. So an arbitrary finite sequence of symbols is not necessarily a meaningful term.

c. In a natural language we construct (meaningful/legitimate) sentences from words using particular rules. So an arbitrary finite sequence of words is not necessarily a meaningful sentence.
c'. In a first-order language we construct (meaningful/legitimate) sentences (formulas) from terms using particular rules. So an arbitrary finite sequence of terms is not necessarily a meaningful sentence (formula).

d. In a natural language we construct (meaningful/legitimate) texts from sentences using particular rules. So an arbitrary finite sequence of sentences is not necessarily a meaningful text.
d'. In a first-order language we construct (meaningful/legitimate) theories from sentences without any rules. So an arbitrary (finite or infinite) set of sentences is a theory.

Question 1: Why is the chain of producing new legitimate objects from former, simpler legitimate objects broken at the level of theories in first-order logic?

Question 2: Are there logics with particular rules for producing legitimate theories from sentences?

Question 3: Is there a reasonable criterion to determine which sets of first-order sentences are legitimate first-order theories?

• I would actually argue it's a strength of first-order logic that it places practically no constraints on what counts as a theory. Perhaps even the primary strength. – Malice Vidrine Nov 24 '13 at 1:59

• I don't understand your a--a' and b--b' correspondences. All written natural languages consist of one- or two-dimensional arrays of symbols from a finite alphabet ("alphabet" being a set of symbols broadly interpreted to include boundary indicators, punctuation, diacritics, typographic distinctions, Chinese characters, etc., as required). The atoms of a natural written language, just as with a formal language, are thus symbols, not sets of symbols (or alphabets). – John Bentin Nov 24 '13 at 14:51

• Most natural languages have never been written. The natural form of every natural language is either speech or, in the case of Nicaraguan Sign Language, gesture. It isn't even true that the atomic elements of natural languages are words, since there is no satisfactory cross-linguistic definition of word. You might be able to justify a rough equivalence between mathematical symbols and morphemes, though even that is pretty shaky. That said, I agree with Trevor that proofs are a better analogue of texts. – Brian M. Scott Nov 25 '13 at 1:26

I think that there is no reason to expect the analogy to continue with d and d'. The intent of a formal theory is just to assert that the sentences it contains are all true. Even if we assume that the intent of uttering a natural-language sentence is to assert its truth, the intent of a natural-language text is usually not just to assert the truth of all its component sentences.
In particular, in a natural-language text it is desirable to follow a kind of natural progression from one sentence to the next. Probably a better analogy would be between formal proofs and natural-language texts. Like a formal theory, a formal proof can be represented as a sequence of sentences, but unlike a formal theory there are rules and conventions for its formation, and it is supposed to follow a logical progression from one sentence to the next.

• Your correspondence between formal proofs in a first-order language and texts in natural languages is interesting. Is there any similar structure in natural languages for first-order theories? – user108850 Nov 24 '13 at 3:58

(This is not quite an answer, but you might still find it useful enough.)

You need to distinguish between syntax and semantics. While the sentence "The dog programmed a cat to force a power set" is syntactically correct (I hope), it is semantically meaningless. In first-order logic, the theory $\{p,\lnot p\}$, while formally a theory (it is a set of well-formed sentences, given that $p$ is one), is inconsistent and therefore semantically meaningless.

We are not interested in every theory; we are interested in the theories which are not inconsistent, or at least not exhibiting obvious proofs of inconsistency [1] (assuming some reasonable foundational theory in the background, e.g. $\sf PA$ or $\sf ZFC$). So your point, while correct, misses the point. Meaningfulness is semantic consistency, and in first-order logic we have the completeness theorem, which tells us that a theory is consistent if and only if it has a model, i.e. a meaning.

It seems to me, therefore, that all your questions are about consistency: that we should allow creating theories only when we can ensure they are consistent (and indeed in one model theory course that I took, a theory was always assumed to be consistent within the definition). For this the compactness theorem is wonderful. It tells us that a theory is consistent if and only if the conjunction of every finite fragment is not a false sentence, which gives us a wonderful criterion for meaningful theories.

Footnotes.
1. Much like with humans, we may be interested in information which sounds meaningful but, after some investigation, conclude that it is pure nonsense; this is the analogue of theories whose consistency we cannot prove, but which we have not yet disproved either.

• The "not too exact" phrase in my first sentence refers to this distinction between "syntax" and "semantics", which you correctly expressed. But the soul of my question is about the method of constructing new, complicated, syntactically legitimate objects from simple ones in first-order logic, which is suddenly broken when we want to produce theories. The way we define theories in first-order logic shows a deep difference from the ways we define other lingual objects like "symbols", "terms" and "formulas". – user108850 Nov 24 '13 at 2:29

• But the point is that in natural language we may construct syntactically correct but meaningless texts. It seems that the tool you want is consistency. Again... you can merge two theories if the union is consistent. – Asaf Karagila Nov 24 '13 at 2:34

• I infer from item d in the question that you do not consider poetry to be meaningful/legitimate texts, because poets would undoubtedly object to the idea that there are rules governing how they can assemble sentences (or even requiring them to use sentences).
Even apart from poetry, I would be interested to see the rules governing how texts can be assembled in, say, postmodernist philosophy. (I won't object if you declare those texts meaningless, but declaring them illegitimate might be going a bit too far.) – Andreas Blass Nov 24 '13 at 2:53

• I see the point you made. But I feel something is wrong here. Based on the natural meaning of the word "theory", a theory/story/poem is not just an accumulation of sentences without any discipline. There is structure on the sentences of a particular natural-language text such as a theory, story, or poem. – user108850 Nov 24 '13 at 3:04

• @SaintGeorg: Nothing is wrong here. All you're saying is that in first-order logic the word theory has a precise meaning that does not include all of the connotations of the everyday sense(s) of the word. That's not at all unusual when everyday words are given precise technical meanings in some discipline. – Brian M. Scott Nov 25 '13 at 1:32
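A standard illustration of the compactness criterion mentioned in the answer above (this example does not appear in the original thread): let $\varphi_n$ be the first-order sentence asserting that there exist at least $n$ distinct elements,

$$\varphi_n \;=\; \exists x_1 \ldots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j,$$

and let $T = \{\varphi_n : n \in \mathbb{N}\}$. Every finite fragment of $T$ is consistent (any sufficiently large finite set is a model of it), so by compactness $T$ itself is consistent; its models are exactly the infinite structures. Consistency of finite fragments, rather than any grammar-like formation rule, is what certifies a set of sentences as a meaningful theory.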
http://www.chemgapedia.de/vsengine/tra/vsc/en/ch/2/tra/pericyclische_reaktionen.tra/Vlu/vsc/en/ch/2/vlu/pericyclische_reaktionen/peri_aroma.vlu/Page/vsc/en/ch/2/oc/reaktionen/formale_systematik/pericyclische_reaktionen/aromatizitaet/beispiele.vscml.html
# Pericyclic Reactions: Aromaticity of Transition States

## Aromaticity: Examples

Fig.1 Reaction equation
Fig.2 Orbital model

A total of 6 p orbitals from butadiene and ethylene take part in the reaction. Two σ bonds and one π bond are formed from three π bonds during the reaction. The mechanism is formally described by three arrows indicating the flow of electrons, with each arrow representing two electrons. Since the number of sign inversions in the transition state is even (in this case zero), the system is a Hückel system. The cyclic transition state with 4n+2 electrons (6 electrons in this case) is aromatic and the reaction is, therefore, allowed.

Fig.3 Reaction equation
Fig.4 Orbital model

The basis set consists of two p orbitals, one $sp^3$ orbital, and one s orbital. Four electrons take part in the reaction, indicated by two arrows that show the flow of electrons. The number of sign inversions in the transition state is even (zero in this case), so this is again a Hückel system; with 4 electrons (a 4n count) the cyclic transition state is Hückel-antiaromatic, i.e., the reaction is disallowed.

Fig.5 Reaction equation
Fig.6 Orbital model

A methyl group is being transferred in this example. The participating p orbital of the methyl group can be drawn to indicate conjugation (red line) passing through the origin of the p orbital. This does not count as a sign inversion. Therefore, the number of sign inversions is odd (one in this case), and we are dealing with a Möbius system. The system is aromatic because four electrons are involved, and the reaction is allowed. Migration of the methyl group proceeds with inversion, similar to the $\mathrm{S_N2}$ reaction. This can be observed experimentally only when the migrating group is chiral, i.e., contains four different substituents. The following simple rule involving stereochemistry can be set up: inversion takes place if conjugation at a reaction center passes through the origin of the orbital.
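The electron bookkeeping in these examples reduces to simple arithmetic; writing it out for the two Hückel cases (this worked count is an editorial illustration of the rule stated above):

$$\text{electrons} = 2 \times (\text{number of arrows}), \qquad
\begin{cases}
3 \text{ arrows} \Rightarrow 6 = 4(1)+2 & \text{Hückel: aromatic, allowed} \\
2 \text{ arrows} \Rightarrow 4 = 4(1) & \text{Hückel: antiaromatic, disallowed}
\end{cases}$$

For a Möbius topology the counts swap roles: $4n$ electrons give an aromatic transition state (as in the methyl-transfer example with 4 electrons), while $4n+2$ electrons give an antiaromatic one.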
https://bodheeprep.com/cat-quant-questions-solutions/4
# CAT Quant Questions with Video Solutions

Note: These Quant questions have been selected from the 1000+ CAT Quant Practice Problems with video solutions of Bodhee Prep's Online CAT Quant Course.

Question 16: Let $N = 1! \times 2! \times 3! \times \cdots \times 99! \times 100!$. If $\frac{N}{p!}$ is a perfect square for some positive integer $p \le 100$, find the value of p.
Topic: factorials

Question 17: If $x^2 + y^2 = 1$, find the maximum value of $x^2 + 4xy - y^2$.
Topic: maxima and minima
[1] $1$ [2] $\sqrt 2$ [3] $\sqrt 5$ [4] $4$

Question 18: The compound interest on a certain amount for two years is Rs. 291.2 and the simple interest on the same amount is Rs. 280. If the rate of interest is the same in both cases, find the principal amount.
Topic: simple and compound interest
[1] 1200 [2] 1400 [3] 1700 [4] 1750

Question 19: In the diagram given below, the circle and the square have the same center O and equal areas. The circle has radius 1 and intersects one side of the square at P and Q. What is the length of PQ?
Topic: circles
[1] 1 [2] 3/2 [3] $\sqrt{4 - \pi}$ [4] $\sqrt{\pi - 1}$

Question 20: What is the remainder when $x^{276}+12$ is divided by $x^2+x+1$, given that the remainder is a positive integer?
Topic: remainders

### 8 thoughts on "CAT Quant Questions with Video Solutions"

1. MAHESH AGGARWAL says: Do you have an exclusive package of video solutions of the last 10-15 years of CAT exams? I am interested in just that. I am helping a girl appearing for the CAT 2019 exam.

Reply: We have already included all the good questions from CAT and other MBA entrance exams in our course. All these questions come with video explanations.

- Pavani says: If f is 3, F(3): 6+3+2 is 11

2. Rajaraman says: Can you tell the name of the theorem that you mentioned in the first question?

3. Abinash says: Sir, for question no. 23: we can proceed from x+y=2-z. Cubing both sides gives $x^3+y^3+z^3=8-(2-z)(6z+3xy)$; since it is given that $x^3+y^3+z^3=8$, then $(2-z)(6z+3xy)=0$, so take z=2 (considering an integer value for easy output). Now putting this z value into every given equation: $x+y=0$, $x^2+y^2=2$, $x^3+y^3=0$. From these three equations we find that if one of x or y is positive then the other is negative, but both have the same magnitude, i.e. $\pm 1$... thus $x^4+y^4+z^4=18$.

4. Siddharth says: Set 1, Question 5. I want to know whether the logic below would be wrong. Distance is constant. If the speed increases by 10 km/hr, the time decreases by 4 hours. So to decrease the time by 2 hours, the speed can be increased by 5 km/hr: 20 + 5 = 25 km/hr. I understand something might be wrong with this logic, but could someone help pinpoint what?

- shaswat says: The assumption "to decrease time by 2 hours, speed can be increased by 5 km/hr" is wrong. It would be right if you knew the initial time and considered the journey from the start, but since the journey is already going on, you can't do that.
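As a sample of the arithmetic behind these problems, here is one way to work Question 18 (an editorial sketch; the course's own video solution may differ). Simple interest of Rs. 280 over two years means Rs. 140 per year. Over two years, compound interest exceeds simple interest by exactly one year's interest on the first year's interest, here $291.2 - 280 = 11.2$:

$$140 \times r = 11.2 \;\Rightarrow\; r = 0.08 = 8\%, \qquad P = \frac{140}{0.08} = 1750.$$

So the principal is Rs. 1750, option [4]; as a check, $1750\left(1.08^2 - 1\right) = 291.2$.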
https://tex.stackexchange.com/questions/409121/beamer-different-footline-on-the-last-page
# Beamer : different footline on the last page

The footline of my beamer presentation contains page numbers. Is it possible to add a different footline to the last frame (something like "this is the last slide")? Please note that I am talking about adding the command to the preamble, so that I don't have to manually type the required footline on the last page. A minimal working example is:

```latex
\documentclass{beamer}
\begin{document}
\begin{frame}
First page
\end{frame}
\begin{frame}
Last page

Footline - "last slide" is needed here.
\end{frame}
\end{document}
```

I tried this method - Footer on last page (for article or book class), which is not working for beamer. Any help on this is highly appreciated. Thanks in advance.

```latex
\documentclass{beamer}

\setbeamertemplate{footline}{%
  \if\insertframenumber\inserttotalframenumber
    last frame
  \else
    footline of normal slides
  \fi
}

\begin{document}
\begin{frame}
First page
\end{frame}
\begin{frame}
Last page

Footline - "last slide" is needed here.
\end{frame}
\end{document}
```

• How do I modify \if\insertframenumber\inserttotalframenumber if I want it to refer to the first (title) frame? Or frame No. 4? – Viesturs Jan 25 '18 at 21:33
• @Viesturs Why do you want to modify the footline of the titlepage? To remove it? – samcarter_is_at_topanswers.xyz Jan 25 '18 at 21:49
• Yes. I saw in tex.stackexchange.com/questions/18828/… that one can use \setbeamertemplate{footline}{} in the scope of the titlepage, but I am nevertheless curious how to modify that expression. – Viesturs Jan 25 '18 at 21:55
• @Viesturs \if\insertframenumber1, but such hard-coded values are quite inflexible. What if one of your presentations has no title page, or the title page is not the first slide? I think such a change should be made from within the presentation, e.g. \begin{frame}[plain] \titlepage \end{frame} – samcarter_is_at_topanswers.xyz Jan 25 '18 at 21:59
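A more robust variant compares the numbers with \ifnum instead of \if (TeX's \if compares single tokens, so the test above can misfire once the frame numbers have more than one digit); a minimal sketch of the same template:

```latex
\setbeamertemplate{footline}{%
  \ifnum\insertframenumber=\inserttotalframenumber
    last frame% special footline content goes here
  \else
    footline of normal slides% regular footline content goes here
  \fi
}
```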
https://www.physicsforums.com/threads/chemistry-exam-qn.123325/
# Chemistry exam question

1. Jun 9, 2006

### Ukitake Jyuushirou

Could someone tell me roughly how to work out this set of questions, please? I have been thinking and doing a lot of working, but none of my answers is remotely close. :(

1) The reaction below releases 56.6 kJ of heat at 298 K for each mole of NO2 formed at a constant pressure of 1 atm. What is the standard enthalpy of formation of NO2, given that the standard enthalpy of formation of NO is 90.4 kJ/mol?

2NO + O2 ---> 2NO2

2) A 200 g piece of copper at 100 degrees Celsius is dropped into 1000 g of water at 25 degrees Celsius. What is the final temperature of the system? The specific heat of water is 4.18 J/(g·°C) and that of copper is 0.400 J/(g·°C).

3) If the equilibrium constant for A + B <===> C is 0.123, what is the equilibrium constant for 2C <===> 2A + 2B?

2. Jun 9, 2006

### Hootenanny (Staff Emeritus)

Question Two HINT: The energy lost by the copper is equal to the energy gained by the water. Try setting up simultaneous equations.

Question Three HINT: You have increased the concentration of all the reactants equally.

3. Jun 10, 2006

### Saketh

For Question One, use the fact that $$\Delta H = \Sigma (\Delta H_{products})-\Sigma (\Delta H_{reactants})$$. You know that since oxygen is a pure element, its heat of formation is zero. You know $$\Delta H$$, and you know $$\Sigma (\Delta H_{reactants})$$. You have to find $$\Sigma (\Delta H_{products})$$.

For Question Two, you know that $$q = mc(T_{f} - T_{i})$$. Find the heat that the copper is holding. Now that you know $$q$$, you also know that it is all transferred to the water, so write another q-equation, but this time you are solving for T of the water. Find the equilibrium temperature of the water and copper system - that is your $$T_{f}$$ for the copper. You know $$T_{i}$$ for both the copper and the water, so all you do now is plug and chug.

For Question Three, write the $$K_{eq}$$ expression for A + B <===> C, then write it for 2C <===> 2A + 2B. Remember that when you flip the reactants and the products, you have to take the reciprocal of $$K_{eq}$$, and that when you multiply all the coefficients by a number $$N$$, you have to raise all of the terms in the $$K_{eq}$$ expression to that power $$N$$. So, for example: the equilibrium constant for a reaction A + B + C <===> D + E + F is $$\frac{[D][E][F]}{[A][B][C]}$$. For 3D + 3E + 3F <===> 3A + 3B + 3C, it is $$\frac{[A]^{3}[B]^{3}[C]^{3}}{[D]^{3}[E]^{3}[F]^{3}}$$.

Last edited: Jun 10, 2006
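Following Saketh's hint for Question One through to numbers (an editorial check, not part of the thread): the reaction forms 2 mol of NO2 and releases 56.6 kJ per mole of NO2 formed, so $$\Delta H_{rxn} = -113.2 \text{ kJ}$$. Then

$$\Delta H_{rxn} = 2\,\Delta H_f(\mathrm{NO_2}) - 2\,\Delta H_f(\mathrm{NO}) - \Delta H_f(\mathrm{O_2}) \;\Rightarrow\; -113.2 = 2\,\Delta H_f(\mathrm{NO_2}) - 2(90.4) - 0,$$

which gives $$\Delta H_f(\mathrm{NO_2}) = 90.4 - 56.6 = 33.8 \text{ kJ/mol}.$$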
https://ghc.haskell.org/trac/ghc/wiki/TypeFunctions/Ambiguity?version=2
# Ambiguity

The question of ambiguity in Haskell is a tricky one. This wiki page is a summary of thoughts and definitions, in the hope of gaining clarity. I'm using a wiki because it's easy to edit, and many people can contribute, even though you can't typeset nice rules. [Started Jan 2010.] Please edit to improve.

## Terminology

A type system is usually specified by

- A specification, in the form of some declarative typing rules. These rules often involve "guessing types". Here is a typical example, for variables:

```
(f : forall a1,..,an. C => tau) \in G
theta = [t1/a1, ..., tn/an]     -- Substitution, guessing ti
Q |= theta( C )
------------------------------------- (VAR)
Q, G |- f :: theta(tau)
```

The preconditions say that f is in the environment G with a suitable polymorphic type. We "guess" types t1..tn, and use them to instantiate f's polymorphic type variables a1..an, via a substitution theta. Under this substitution, f's instantiated constraints theta(C) must be deducible (using |=) from the ambient constraints Q. The point is that we "guess" the ti.

- An inference algorithm, often also presented using similar-looking rules, but in a form that can be read as an algorithm with no "guessing". Typically:
  - The "guessing" is replaced by generating fresh unification variables.

## Coherence

Suppose we have (I conflate classes Read and Show into one class Text for brevity; the body given for x below is a representative reconstruction):

```haskell
class Text a where
  show :: a -> String
  read :: String -> a   -- the Read half of the conflated class

x :: String
x = show (read "3")     -- the intermediate type t in (read "3" :: t) is unconstrained
```

The trouble is that there is a constraint (Text t), where t is a type variable that is otherwise unconstrained. Moreover, the type that we choose for t affects the semantics of the program. For example, if we chose t = Int then we might get x = "3", but if we choose t = Float we might get x = "3.7". This is bad: we want our type system to be coherent, in the sense that every well-typed program has but a single value.

In practice, the Haskell Report, and every Haskell implementation, rejects such a program, saying something like

```
Cannot deduce (Text t) from ()
```

In algorithmic terms this is very natural: we indeed have a constraint (Text t) for some unification variable t, and no way to solve it, except by searching for possible instantiations of t. So we simply refrain from trying such a search. But in terms of the type system specification it is harder. Usually a

Problem 1: how can w
https://debraborkovitz.com/10-random-questions/
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time. To see ten new questions, reload the page.

I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what's on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view questions on a particular topic or to download paper practice tests.

MTEL General Curriculum Mathematics Practice

Question 1

Which of the numbers below is a fraction equivalent to $$0.\bar{6}$$?

A $$\large \dfrac{4}{6}$$. Hint: $$0.\bar{6}=\dfrac{2}{3}=\dfrac{4}{6}$$

B $$\large \dfrac{3}{5}$$. Hint: This is equal to 0.6, without the repeating decimal. The answer is equivalent to choice C, which is another way to tell that it's wrong.

C $$\large \dfrac{6}{10}$$. Hint: This is equal to 0.6, without the repeating decimal. The answer is equivalent to choice B, which is another way to tell that it's wrong.

D $$\large \dfrac{1}{6}$$. Hint: This is less than a half, and $$0.\bar{6}$$ is greater than a half.

Question 1 Explanation: Topic: Converting between fraction and decimal representations (Objective 0017)

Question 2

Each number in the table above represents a value W that is determined by the values of x and y. For example, when x=3 and y=1, W=5. What is the value of W when x=9 and y=14? Assume that the patterns in the table continue as shown.

A $$\large W=-5$$. Hint: When y is even, W is even.

B $$\large W=4$$. Hint: Note that when x increases by 1, W increases by 2, and when y increases by 1, W decreases by 1. At x=y=0, W=0, so at x=9, y=14, W has increased by $$9 \times 2$$ and decreased by 14, or W=18-14=4.

C $$\large W=6$$. Hint: Try fixing x or y at 0, and start by finding W for x=0, y=14 or x=9, y=0.

D $$\large W=32$$. Hint: Try fixing x or y at 0, and start by finding W for x=0, y=14 or x=9, y=0.

Question 2 Explanation: Topic: Recognize and extend patterns using a variety of representations (e.g., verbal, numeric, pictorial, algebraic) (Objective 0021)

Question 3

A sphere. Hint: All views would be circles.

A cone. Hint: Two views would be triangles, not rectangles.

A pyramid. Hint: How would one view be a circle?

Question 3 Explanation: Topic: Match three-dimensional figures and their two-dimensional representations (e.g., nets, projections, perspective drawings) (Objective 0024).

Question 4

212. Hint: Can the number of toothpicks be even?

213. Hint: One way to see this is that every new "house" adds 4 toothpicks to the leftmost vertical toothpick -- so the total number is 1 plus 4 times the number of "houses." There are many other ways to look at the problem too.

217. Hint: Try your strategy with a smaller number of "houses" so you can count and find your mistake.

265. Hint: Remember that the "houses" overlap some walls.

Question 4 Explanation: Topic: Recognize and extend patterns using a variety of representations (e.g., verbal, numeric, pictorial, algebraic). (Objective 0021).

Question 5

Which of the following values of x satisfies the inequality $$\large \left| (x+2)^{3} \right|<3$$?

A $$\large x=-3$$. Hint: $$\left| (-3+2)^{3} \right| = \left| (-1)^3 \right| = \left| -1 \right| = 1$$.
B $$\large x=0$$. Hint: $$\left| (0+2)^{3} \right| = \left| 2^3 \right| = \left| 8 \right| = 8$$

C $$\large x=-4$$. Hint: $$\left| (-4+2)^{3} \right| = \left| (-2)^3 \right| = \left| -8 \right| = 8$$

D $$\large x=1$$. Hint: $$\left| (1+2)^{3} \right| = \left| 3^3 \right| = \left| 27 \right| = 27$$

Question 5 Explanation: Topics: Laws of exponents, order of operations, interpret absolute value (Objective 0019).

Question 6

What is the perimeter of the window glass?

A $$\large 3x+\dfrac{\pi x}{2}$$. Hint: By definition, $$\pi$$ is the ratio of the circumference of a circle to its diameter; thus the circumference is $$\pi d$$. Since we have a semi-circle, its perimeter is $$\dfrac{1}{2} \pi x$$. Only 3 sides of the square contribute to the perimeter.

B $$\large 3x+2\pi x$$. Hint: Make sure you know how to find the circumference of a circle.

C $$\large 3x+\pi x$$. Hint: Remember it's a semi-circle, not a circle.

D $$\large 4x+2\pi x$$. Hint: Only 3 sides of the square contribute to the perimeter.

Question 6 Explanation: Topic: Derive and use formulas for calculating the lengths, perimeters, areas, volumes, and surface areas of geometric shapes and figures (Objective 0023).

Question 7

The expression $$\large 8^{3}\cdot 2^{-10}$$ is equal to which of the following?

A $$\large 2$$. Hint: Write $$8^3$$ as a power of 2.

B $$\large \dfrac{1}{2}$$. Hint: $$8^3 \cdot 2^{-10}=(2^3)^3 \cdot 2^{-10}=2^9 \cdot 2^{-10}=2^{-1}$$

C $$\large 16$$. Hint: Write $$8^3$$ as a power of 2.

D $$\large \dfrac{1}{16}$$. Hint: Write $$8^3$$ as a power of 2.

Question 7 Explanation: Topic: Laws of Exponents (Objective 0019).

Question 8

The function d(x) gives the result when 12 is divided by x. Which of the following is a graph of d(x)?

A. Hint: d(x) is 12 divided by x, not x divided by 12.

B. Hint: When x=2, what should d(x) be?

C. Hint: When x=2, what should d(x) be?

D

Question 8 Explanation: Topic: Identify and analyze direct and inverse relationships in tables, graphs, algebraic expressions and real-world situations (Objective 0021)

Question 9

Four children randomly line up, single file. What is the probability that they are in height order, with the shortest child in front? All of the children are different heights.

A $$\large \dfrac{1}{4}$$. Hint: Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.

B $$\large \dfrac{1}{256}$$. Hint: Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.

C $$\large \dfrac{1}{16}$$. Hint: Try a simpler question with 3 children -- call them big, medium, and small -- and list all the ways they could line up. Then see how to extend your logic to the problem with 4 children.

D $$\large \dfrac{1}{24}$$. Hint: The number of ways for the children to line up is $$4!=4 \times 3 \times 2 \times 1 =24$$ -- there are 4 choices for who is first in line, then 3 for who is second, etc. Only one of these lines has the children in the order specified.

Question 9 Explanation: Topic: Apply knowledge of combinations and permutations to the computation of probabilities (Objective 0026).

Question 10

Commutative Property. Hint: For addition, the commutative property is $$a+b=b+a$$ and for multiplication it's $$a \times b = b \times a$$.

Associative Property.
Hint: For addition, the associative property is $$(a+b)+c=a+(b+c)$$ and for multiplication it's $$(a \times b) \times c=a \times (b \times c)$$

Identity Property. Hint: 0 is the additive identity, because $$a+0=a$$, and 1 is the multiplicative identity, because $$a \times 1=a$$. The phrase "identity property" is not standard.

Distributive Property. Hint: $$(25+1) \times 16 = 25 \times 16 + 1 \times 16$$. This is an example of the distributive property of multiplication over addition.

Question 10 Explanation: Topic: Analyze and justify mental math techniques, by applying arithmetic properties such as commutative, distributive, and associative (Objective 0019). Note that it's hard to write a question like this as a multiple choice question -- it's worthwhile to understand why the other steps work too.

There are 10 questions to complete. If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here.
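A quick arithmetic check of the pattern in Question 2 above (an editorial verification based on the hint given there, not part of the original quiz): the hint says W increases by 2 when x increases by 1, decreases by 1 when y increases by 1, and equals 0 at x=y=0, i.e.

$$W = 2x - y.$$

This reproduces the example in the question, $$W(3,1) = 2(3)-1 = 5$$, and gives $$W(9,14) = 18-14 = 4$$, matching choice B.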
https://www.physicsforums.com/threads/how-do-i-know-if-this-field-has-a-mass-term.185174/
# How do I know if this field has a mass term?

1. Sep 17, 2007

### Lecticia

1. Special Relativity

2. The problem statement, all variables and given/known data

Consider this Lagrangian:

L = (1/2)(\partial_{\mu} \Psi)(\partial^{\mu} \Psi) + \exp(-(a \Psi)^2)

Does this field have a mass term?

2. Relevant equations

3. The attempt at a solution

2. Sep 17, 2007

### Staff: Mentor

Is this what one intended to write, or is this given in some text?

$$L= \frac{1}{2} (\partial_{\mu} \Psi)(\partial^{\mu} \Psi) + e^{-{(a \Psi)}^2}$$

3. Sep 17, 2007

### Lecticia

Yes, exactly this Lagrangian, where $$\Psi$$ is a scalar field.

Last edited: Sep 17, 2007

4. Sep 17, 2007

### nrqed

This is a nonlinear field theory, so, strictly speaking, there is no clear meaning for a mass term. But I am guessing that they want you to treat the parameter "a" as small and to do a Taylor expansion of the exponential. If you do that, you will generate a mass term. That's my guess.

5. Sep 17, 2007

### dextercioby

Well, just one question: you have the Lagrangian; what are the field equations?

6. Sep 17, 2007

### Lecticia

Do you mean the equations of motion?
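Carrying out the expansion nrqed suggests (an editorial sketch using standard conventions; the overall signs depend on the metric convention):

$$e^{-(a\Psi)^2} = 1 - a^2\Psi^2 + \frac{a^4\Psi^4}{2} - \cdots$$

The constant 1 does not affect the dynamics. Comparing the quadratic term with the standard mass term $$-\frac{1}{2}m^2\Psi^2$$ of a free scalar field gives $$\frac{1}{2}m^2 = a^2$$, i.e. $$m^2 = 2a^2$$; the higher powers of $$\Psi$$ are interaction terms.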
https://www.jobilize.com/physics2/course/12-1-the-biot-savart-law-sources-of-magnetic-fields-by-openstax?qcr=www.quizover.com
# 12.1 The Biot-Savart law

By the end of this section, you will be able to:

• Explain how to derive a magnetic field from an arbitrary current in a line segment
• Calculate magnetic field from the Biot-Savart law in specific geometries, such as a current in a line and a current in a circular arc

We have seen that mass produces a gravitational field and also interacts with that field. Charge produces an electric field and also interacts with that field. Since moving charge (that is, current) interacts with a magnetic field, we might expect that it also creates that field, and it does.

The equation used to calculate the magnetic field produced by a current is known as the Biot-Savart law. It is an empirical law named in honor of two scientists who investigated the interaction between a straight, current-carrying wire and a permanent magnet. This law enables us to calculate the magnitude and direction of the magnetic field produced by a current in a wire. The Biot-Savart law states that at any point P, the magnetic field $d\vec{B}$ due to an element $d\vec{l}$ of a current-carrying wire is given by

$$d\vec{B} = \frac{\mu_0}{4\pi}\,\frac{I\,d\vec{l}\times\hat{r}}{r^2}.$$

The constant $\mu_0$ is known as the permeability of free space and is exactly

$$\mu_0 = 4\pi \times 10^{-7}\ \mathrm{T\cdot m/A}$$

in the SI system. The infinitesimal wire segment $d\vec{l}$ is in the same direction as the current I (assumed positive), r is the distance from $d\vec{l}$ to P, and $\hat{r}$ is a unit vector that points from $d\vec{l}$ to P, as shown in the figure. The direction of $d\vec{B}$ is determined by applying the right-hand rule to the vector product $d\vec{l}\times\hat{r}$. The magnitude of $d\vec{B}$ is

$$dB = \frac{\mu_0}{4\pi}\,\frac{I\,dl\,\sin\theta}{r^2}$$

where $\theta$ is the angle between $d\vec{l}$ and $\hat{r}$. Notice that if $\theta = 0$, then $d\vec{B} = \vec{0}$. The field produced by a current element $I\,d\vec{l}$ has no component parallel to $d\vec{l}$.

The magnetic field due to a finite length of current-carrying wire is found by integrating this expression along the wire, giving us the usual form of the Biot-Savart law.

## Biot-Savart law

The magnetic field $\vec{B}$ due to an element $d\vec{l}$ of a current-carrying wire is given by

$$\vec{B} = \frac{\mu_0}{4\pi}\int_{\text{wire}}\frac{I\,d\vec{l}\times\hat{r}}{r^2}.$$

Since this is a vector integral, contributions from different current elements may not point in the same direction. Consequently, the integral is often difficult to evaluate, even for fairly simple geometries. The following strategy may be helpful.

## Problem-solving strategy: solving Biot-Savart problems

To solve Biot-Savart law problems, the following steps are helpful:

1. Identify that the Biot-Savart law is the chosen method to solve the given problem.
If there is symmetry in the problem comparing $\vec{B}$ and $d\vec{l}$, Ampère's law may be the preferred method to solve the question.

2. Draw the current element length $d\vec{l}$ and the unit vector $\hat{r}$, noting that $d\vec{l}$ points in the direction of the current and $\hat{r}$ points from the current element toward the point where the field is desired.

3. Calculate the cross product $d\vec{l}\times\hat{r}$. The resultant vector gives the direction of the magnetic field according to the Biot-Savart law.

4. Substitute all given quantities into the expression for $dB$ above to solve for the magnetic field. Note that all variables that remain constant over the entire length of the wire may be factored out of the integration.

5. Use the right-hand rule to verify the direction of the magnetic field produced from the current, or to write down the direction of the magnetic field if only the magnitude was solved for in the previous part.
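As a first worked application of this strategy (a standard example, not part of this excerpt of the section): for a circular loop of radius R carrying current I, every element $d\vec{l}$ on the loop is perpendicular to the unit vector $\hat{r}$ pointing to the center, so $|d\vec{l}\times\hat{r}| = dl$ and every contribution $dB$ at the center points along the loop's axis. The integral then collapses to the circumference:

$$B = \frac{\mu_0}{4\pi}\,\frac{I}{R^2}\oint dl = \frac{\mu_0}{4\pi}\,\frac{I}{R^2}\,(2\pi R) = \frac{\mu_0 I}{2R}.$$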
Source: OpenStax, University physics volume 2. OpenStax CNX. Oct 06, 2016. Download for free at http://cnx.org/content/col12074/1.3
http://mathhelpforum.com/algebra/226533-simple-complex-numbers-question-can-show-me-solution-2.html
# Math Help - this is a simple complex numbers question... can someone show me the solution?

1. ## Re: this is a simple complex numbers question... can someone show me the solution?

Originally Posted by romsek
there's nothing in the original problem that stated p and q were real. We just assumed that.

That's really beyond the pale. It's patently obvious. If you don't assume it, you can't solve the problem. But the possibility was (unknowingly) incorporated in post #3, i.e., it has already been addressed.

Note: The definitions defining a complex number a+bi only apply if a and b are real.

EDIT: OK, post #3: "Gather the real and imaginary parts together in the form a+bi = 0 and equate a and b to 0." That should have been the end of the thread. Explaining elementary arithmetic?
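The point under discussion, in symbols (an editorial restatement, since the original problem is not shown on this page): for real $p$ and $q$,

$$p + qi = 0 \;\Longleftrightarrow\; p = 0 \text{ and } q = 0,$$

because a nonzero real or imaginary part would make the modulus $\sqrt{p^2+q^2}$ nonzero. If $p$ and $q$ were themselves allowed to be complex, the implication would fail: for instance $p = 1$, $q = i$ gives $p + qi = 1 + i^2 = 0$ with neither coefficient zero.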
http://mathhelpforum.com/discrete-math/19457-cross-product-multiple-sets.html
# Thread: The cross product and multiple sets? 1. ## The cross product and multiple sets? Hi all, Just a quick question... I have a question in my Computer Science homework that goes as follows: (S x S) x S where S= {3,4} Now does that mean that I end up, after doing the first product, doing it normally across each of the four sets to make a total of 16 ordered pairs? Or do I have it so that I have several sets of three elements each? Thanks so much!!! 2. Originally Posted by srstakey Hi all, Just a quick question... I have a question in my Computer Science homework that goes as follows: (S x S) x S where S= {3,4} Now does that mean that I end up, after doing the first product, doing it normally across each of the four sets to make a total of 16 ordered pairs? Or do I have it so that I have several sets of three elements each? Thanks so much!!! S x S = 0, so ( S x S ) x S = 0 x S = 0 RonL 3. Originally Posted by srstakey I have a question in my Computer Science homework that goes as follows: (S x S) x S where S= {3,4} I think that “cross product” here refers to Cartesian Cross Products of sets. If that is correct, then (SxS)xS would be a set of eight triples. 4. Originally Posted by Plato I think that “cross product” here refers to Cartesian Cross Products of sets. If that is correct, then (SxS)xS would be a set of eight triples. Yes-thank you Plato! (I never thought I would be thanking Plato himself )
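Writing out Plato's answer explicitly (an editorial enumeration, not part of the thread): with S = {3,4}, S x S = {(3,3), (3,4), (4,3), (4,4)}, so

(S x S) x S = {((3,3),3), ((3,3),4), ((3,4),3), ((3,4),4), ((4,3),3), ((4,3),4), ((4,4),3), ((4,4),4)},

a set of 2 x 2 x 2 = 8 elements, each an ordered pair whose first coordinate is itself an ordered pair, i.e. eight triples. (CaptainBlack's "S x S = 0" reads x as the vector cross product, under which any vector crossed with itself is zero; for sets, x here is the Cartesian product.)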
http://stackoverflow.com/questions/16943124/r-predict-glm-score-based-on-only-partial-records?answertab=active
# r predict glm score based on only partial records

I have a glm based on data A and I'd like to score data B to do validation, but some records in B have missing data. Instead of these ending up without a score (na.omit) or being removed (na.exclude), I'd like them to end up with an outputted prediction that uses the model to determine a value based only on the data with values.

A reproducible example...

```r
data(mtcars)
model <- glm(mpg ~ ., data = mtcars)
mtcarsNA <- mtcars

# replace a proportion `prop` of the cells of a data frame with NA
NAins <- NAinsert <- function(df, prop = .1) {
  n <- nrow(df)
  m <- ncol(df)
  num.to.na <- ceiling(prop * n * m)
  id <- sample(0:(m * n - 1), num.to.na, replace = FALSE)
  rows <- id %/% m + 1
  cols <- id %% m + 1
  sapply(seq(num.to.na), function(x) {
    df[rows[x], cols[x]] <<- NA
  })
  return(df)
}

mtcarsNA <- NAins(mtcarsNA, .4)
mtcarsNA$mpg <- mtcars$mpg
predict(model, newdata = mtcarsNA, type = "response")
```

Where I need the last line to return a result (non-NA) for all records. Can you point me in the direction of the code needed?

- Sounds like you need to do imputation. I think there might be packages called (??) mi/mice, or try library("sos"); findFn("imputation") – Ben Bolker Jun 5 '13 at 14:58
- Will take a look now, but to be clear I don't want to impute the missing values in the predictors and then get a score - I want to use only available data and use only the relevant coefficients, which could result in a lower score but fits the requirements I've been given – Steph Locke Jun 5 '13 at 15:02
- So do you want to fill in zeros for the missing data? If y=a+b*x1+c*x2 and x2 is missing, what do you want y-hat to be? a+b*x1 or something else? I would normally suggest y=a+b*x1+c*x2bar where x2bar is the mean of x2 across non-missing cases, which is a (VERY) crude form of imputation ... – Ben Bolker Jun 5 '13 at 15:17
- it should be a+b*x1 – Steph Locke Jun 5 '13 at 15:18
- This doesn't make any sense. You should replace $x_2$ with $\bar{x}_2$. The mean is the probabilistically weighted estimate of $x_2$. But since you actually have a generalized linear model, you really need to do the following. Suppose your prediction function is $f(x_1, x_2, \ldots, x_n)$. Then the correct prediction with missing values $x_{n_1}, x_{n_2}, \ldots, x_{n_k}$ is $\int \ldots \int f(x_1, x_2, \ldots, x_n)\, p(x_{n_1}, \ldots, x_{n_k})\, dx_{n_1} \ldots dx_{n_k}$, where $p$ is the probability density. Replace the integral with a sum and the density with a probability for discrete outcomes. – SMeznaric Jun 15 '14 at 10:51

Based on the conversation in the comments, you want to replace NA values with zero before predicting. This seems dangerous/dubious to me -- use at your own risk.

```r
naZero <- function(x) { x[is.na(x)] <- 0; x }
mtcarszero <- lapply(mtcarsNA, naZero)
predict(model, newdata = mtcarszero, type = "response")
```

should be what you want. For categorical variables, if you are using default treatment contrasts, then I think the consistent thing to do is something like this:

```r
naZero <- function(x) {
  if (is.numeric(x)) {
    repVal <- 0
  } else {
    if (is.factor(x)) {
      repVal <- levels(x)[1]
    } else stop("uh-oh")
  }
  x[is.na(x)] <- repVal
  x
}
```

- I might phrase my similar concern by saying that this is in no way "ignoring" or "not using" the missing values. You're including them but assuming they are all 0. Assuming all missing values have a value of pi wouldn't be ignoring them either. – joran Jun 5 '13 at 15:26
- Great idea - elegantly simple. I have to work on it a bit for categorical variables but the concept is sound - thank you very much. – Steph Locke Jun 5 '13 at 15:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.627547562122345, "perplexity": 1773.0261725936396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399326483.8/warc/CC-MAIN-20151124210846-00048-ip-10-71-132-137.ec2.internal.warc.gz"}
http://clay6.com/qa/336/a-manufacturer-produces-three-products-which-he-sells-in-two-markets-annual
# A manufacturer produces three products $x, y, z$ which he sells in two markets. Annual sales are indicated below: $\begin{array}{c|ccc} \textbf{Market} & x & y & z \\ \hline \text{I} & 10{,}000 & 2{,}000 & 18{,}000 \\ \text{II} & 6{,}000 & 20{,}000 & 8{,}000 \end{array}$ If unit sale prices of x, y and z are Rs 2.50, Rs 1.50 and Rs 1.00, respectively, find the total revenue in each market.

Toolbox:

- If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B: $\begin{bmatrix}AB\end{bmatrix}_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + A_{i,3}B_{3,j} + \cdots + A_{i,n}B_{n,j}$

Step 1: The matrix of annual sales of the products $x, y, z$ (rows are markets I and II) is $\begin{bmatrix}10000 & 2000 & 18000\\6000 & 20000 & 8000\end{bmatrix}$ and the column matrix of unit sale prices, in the order $x, y, z$, is $\begin{bmatrix}2.50\\1.50\\1.00\end{bmatrix}$. The revenue collected in each market is given by the product $\begin{bmatrix}10000 & 2000 & 18000\\6000 & 20000 & 8000\end{bmatrix}\begin{bmatrix}2.50\\1.50\\1.00\end{bmatrix}$

Step 2: Multiplying each row by the column gives $\begin{bmatrix}25000+3000+18000\\15000+30000+8000\end{bmatrix}=\begin{bmatrix}46000\\53000\end{bmatrix}$

Thus the revenue is Rs 46,000 in Market I and Rs 53,000 in Market II. Total revenue = 46,000 + 53,000 = Rs 99,000.
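The matrix product can be verified numerically; a minimal sketch in R (variable names are illustrative):

```r
# Rows of 'sales' are markets I and II; columns are products x, y, z.
sales  <- matrix(c(10000,  2000, 18000,
                    6000, 20000,  8000),
                 nrow = 2, byrow = TRUE)
prices <- c(2.50, 1.50, 1.00)
sales %*% prices        # 46000 (Market I), 53000 (Market II)
sum(sales %*% prices)   # 99000 total revenue
```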
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6824454069137573, "perplexity": 877.8768054544206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720062.52/warc/CC-MAIN-20161020183840-00123-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=componentLibrary/magnetic/fundamentalWave/basicMachines/synchronousInductionMachines/SM_ReluctanceRotor
Fundamental Wave SM_Reluctance Rotor

Synchronous induction machine with reluctance rotor and damper cage

Description

The Fundamental Wave SM Reluctance Rotor (or SM Reluctance Rotor) component models a synchronous reluctance-rotor induction machine with a damper cage. Symmetry of the stator is assumed. The model takes the following loss effects into account:

- heat losses in the temperature-dependent stator winding resistances
- optionally, when enabled: heat losses in the temperature-dependent damper cage resistances
- friction losses
- core losses (only eddy current losses, no hysteresis losses)

Connections

| Name | Description | Modelica ID |
|------|-------------|-------------|
| powerBalance | Power balance | powerBalance |
| flange | Shaft | flange |
| support | Support at which the reaction torque is acting | support |
| plug_sp | Positive plug of stator | plug_sp |
| plug_sn | Negative plug of stator | plug_sn |
| thermalPort | Thermal port of induction machines | thermalPort |
| internalThermalPort | | internalThermalPort |
| internalSupport | | internalSupport |
| ir | Damper cage currents | ir |
| damperCageLossPower | Damper losses | damperCageLossPower |

Parameters

General Parameters

| Name | Default | Units | Description | Modelica ID |
|------|---------|-------|-------------|-------------|
| $m$ | 3 | | Number of stator phases | m |
| $J_r$ | 0.29 | kg·m² | Rotor moment of inertia | Jr |
| Use Support Flange | false | | True (checked) means stator support is enabled | useSupport |
| $J_s$ | $J_r$ | kg·m² | Stator moment of inertia | Js |
| Use Thermal Port | false | | True (checked) means heat port is enabled | useThermalPort |
| $p$ | 2 | | Number of pole pairs (Integer) | p |
| $f_{s,\mathrm{nom}}$ | 50 | Hz | Nominal frequency | fsNominal |
| $T_{s,\mathrm{oper}}$ | 293.15 | K | Operational temperature of stator resistance | TsOperational |
| Effective Stator Turns | 1 | | Effective number of stator turns | effectiveStatorTurns |
| $T_{r,\mathrm{oper}}$ | 293.15 | K | Operational temperature of (optional) damper cage | TrOperational |

Losses Parameters

| Name | Description | Modelica ID |
|------|-------------|-------------|
| Friction Parameters | Friction loss parameter record | frictionParameters |
| Stator Core Parameters | Stator core loss parameter record; all parameters refer to stator side | statorCoreParameters |
| Stray Load Parameters | Stray load loss parameter record | strayLoadParameters |

Nominal Resistances and Inductances Parameters

| Name | Default | Units | Description | Modelica ID |
|------|---------|-------|-------------|-------------|
| $R_s$ | 0.03 | Ω | Stator resistance per phase | Rs |
| $T_{s,\mathrm{ref}}$ | 293.15 | K | Reference temperature of stator resistance | TsRef |
| $\alpha_s$ | 0 | 1/K | Temperature coefficient of stator resistance at 20 °C | alpha20s |
| $L_{s\sigma}$ | $\frac{1}{20\pi f_{s,\mathrm{nom}}}$ | H | Stator stray inductance per phase | Lssigma |
| $L_{s0}$ | $L_{s\sigma}$ | H | Stator zero inductance | Lszero |
| $L_{md}$ | $\frac{9}{20\pi f_{s,\mathrm{nom}}}$ | H | Main field inductance in d-axis | Lmd |
| $L_{mq}$ | $\frac{29}{20\pi f_{s,\mathrm{nom}}}$ | H | Main field inductance in q-axis | Lmq |
| $L_{r\sigma d}$ | $\frac{1}{40\pi f_{s,\mathrm{nom}}}$ | H | Rotor leakage inductance, d-axis, w.r.t. stator side | Lrsigmad |
| $L_{r\sigma q}$ | $L_{r\sigma d}$ | H | Rotor leakage inductance, q-axis, w.r.t. stator side | Lrsigmaq |
| $R_{rd}$ | 0.04 | Ω | Rotor resistance, d-axis, w.r.t. stator side | Rrd |
| $R_{rq}$ | $R_{rd}$ | Ω | Rotor resistance, q-axis, w.r.t. stator side | Rrq |
| $T_{r,\mathrm{ref}}$ | 293.15 | K | Reference temperature of damper resistances in d- and q-axis | TrRef |
| $\alpha_r$ | 0 | 1/K | Temperature coefficient of damper resistances in d- and q-axis | alpha20r |
| Use Damper Cage | true | | True (checked) means damper cage is enabled | useDamperCage |

Modelica Standard Library

The component described in this topic is from the Modelica Standard Library; the original documentation there includes author and copyright information.
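To make the temperature dependence concrete: winding resistances of this kind are usually modeled linearly in temperature, with the coefficient given at 20 °C first converted to the reference temperature. The following R sketch assumes the standard Modelica-style conversion; the function name is illustrative, and the exact formula used by this component is not quoted here.

```r
# Linear temperature dependence of a winding resistance.
# alpha20 is the coefficient at 20 degC (293.15 K); it is converted to the
# reference temperature before being applied.
resistance_at <- function(R_ref, alpha20, T_ref, T_op) {
  alpha_ref <- alpha20 / (1 + alpha20 * (T_ref - 293.15))
  R_ref * (1 + alpha_ref * (T_op - T_ref))
}
# With the default alpha20s = 0 the stator resistance stays at Rs = 0.03:
resistance_at(R_ref = 0.03, alpha20 = 0, T_ref = 293.15, T_op = 363.15)
# With a copper-like coefficient the resistance rises with temperature:
resistance_at(R_ref = 0.03, alpha20 = 3.92e-3, T_ref = 293.15, T_op = 363.15)
```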
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 72, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6669595241546631, "perplexity": 21149.801004340046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00539.warc.gz"}
http://mathhelpforum.com/trigonometry/105785-another-trigonometric-inequality.html
# Math Help - another trigonometric inequality

1. ## another trigonometric inequality

Find the set of values which satisfy the inequality $\sin x < \sqrt{3}\cos x$ for $0 \leq x \leq 360$.

OK, I solve the corresponding equation first:

$\sin x = \sqrt{3}\cos x$

$\tan x = \sqrt{3}$

$x = 60, 240$

These roots, together with the endpoints, give four boundary values: 0, 60, 240, 360. I tested a point in each subinterval and got this solution: $0 \leq 60$ and $240 < x \leq 360$. Am I correct?

2. Originally Posted by thereddevils

> ... and got this solution: $0 \leq x < 60$ and $240 < x \leq 360$. Am I correct?

Yes (but note the small correction of some typos, shown in the quote above: the first interval should read $0 \leq x < 60$).
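As a quick sanity check of the corrected intervals, one can test every integer degree numerically; a minimal sketch in R (the degree-to-radian conversion is the only subtlety):

```r
# Test every integer degree in [0, 360] against sin(x) < sqrt(3)*cos(x)
x  <- 0:360
ok <- sin(x * pi / 180) < sqrt(3) * cos(x * pi / 180)
range(x[ok & x <= 180])   # 0 59    -> consistent with 0 <= x < 60
range(x[ok & x >  180])   # 241 360 -> consistent with 240 < x <= 360
```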
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9920482635498047, "perplexity": 3886.3395168745215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133485.50/warc/CC-MAIN-20140914011213-00126-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}