fuzzylite 6.0 - A Fuzzy Logic Control Library in C++

# fl::Aggregated Class Reference

The Aggregated class is a special Term that stores a fuzzy set with the Activated terms from the Antecedents of a Rule, thereby serving mainly as the fuzzy output value of the OutputVariables.

#include <Aggregated.h>

(Inheritance and collaboration diagrams for fl::Aggregated omitted.)

## Public Member Functions

- Aggregated (const std::string &name="", scalar minimum=fl::nan, scalar maximum=fl::nan, SNorm *aggregation=fl::null)
- Aggregated (const Aggregated &other)
- Aggregated & operator= (const Aggregated &other)
- virtual ~Aggregated () FL_IOVERRIDE
- virtual std::string className () const FL_IOVERRIDE: Returns the name of the class of the term.
- virtual std::string parameters () const FL_IOVERRIDE: Returns the parameters of the term.
- virtual void configure (const std::string &parameters) FL_IOVERRIDE: Does nothing.
- virtual Aggregated * clone () const FL_IOVERRIDE: Creates a clone of the term.
- virtual Complexity complexity () const FL_IOVERRIDE: Computes the estimated complexity of evaluating the membership function.
- virtual Complexity complexityOfMembership () const
- virtual Complexity complexityOfActivationDegree () const
- virtual scalar membership (scalar x) const FL_IOVERRIDE: Aggregates the membership function values of $$x$$ utilizing the aggregation operator.
- virtual scalar activationDegree (const Term *forTerm) const: Computes the aggregated activation degree for the given term.
- virtual const Activated * highestActivatedTerm () const: Iterates over the Activated terms to find the term with the maximum activation degree.
- virtual std::string toString () const FL_IOVERRIDE: Returns the representation of the term in the FuzzyLite Language.
- virtual void setMinimum (scalar minimum): Sets the minimum of the range of the fuzzy set.
- virtual scalar getMinimum () const: Gets the minimum of the range of the fuzzy set.
- virtual void setMaximum (scalar maximum): Sets the maximum of the range of the fuzzy set.
- virtual scalar getMaximum () const: Gets the maximum of the range of the fuzzy set.
- virtual void setRange (scalar minimum, scalar maximum): Sets the range of the fuzzy set to [minimum, maximum].
- virtual scalar range () const: Returns the magnitude of the range of the fuzzy set, i.e., maximum - minimum.
- virtual void setAggregation (SNorm *aggregation): Sets the aggregation operator.
- virtual SNorm * getAggregation () const: Gets the aggregation operator.
- virtual void addTerm (const Term *term, scalar degree, const TNorm *implication): Adds a new Activated term (from the parameters) to the fuzzy set.
- virtual void addTerm (const Activated &term): Adds the activated term to the fuzzy set.
- virtual const Activated & getTerm (std::size_t index) const: Gets the term at the given index.
- virtual const Activated & removeTerm (std::size_t index): Removes the term at the given index without deleting it.
- virtual std::size_t numberOfTerms () const: Returns the number of activated terms.
- virtual void setTerms (const std::vector< Activated > &terms): Sets the activated terms.
- virtual const std::vector< Activated > & terms () const: Returns an immutable vector of activated terms.
- virtual std::vector< Activated > & terms (): Returns a mutable vector of activated terms.
- virtual bool isEmpty () const: Indicates whether the vector of activated terms is empty.
- virtual void clear (): Clears and deletes the activated terms.

Public Member Functions inherited from fl::Term:

- Term (const std::string &name="", scalar height=1.0)
- virtual ~Term ()
- virtual void setName (const std::string &name): Sets the name of the term.
- virtual std::string getName () const: Gets the name of the term.
- virtual void setHeight (scalar height): Sets the height of the term.
- virtual scalar getHeight () const: Gets the height of the term.
- virtual void updateReference (const Engine *engine): Updates the references (if any) to point to the current engine (useful when cloning engines or creating terms within Importer objects).

Protected Attributes inherited from fl::Term:

- scalar _height

## Detailed Description

The Aggregated class is a special Term that stores a fuzzy set with the Activated terms from the Antecedents of a Rule, thereby serving mainly as the fuzzy output value of the OutputVariables. The ownership of the activated terms is transferred to objects of this class, and therefore their destructors will be called upon destruction of this term (or upon calling Aggregated::clear()).

See also: Antecedent, Rule, OutputVariable, Activated, Term.

Since: 6.0

Definition at line 47 of file Aggregated.h.

## ◆ Aggregated() [1/2]

explicit fl::Aggregated::Aggregated (const std::string &name = "", scalar minimum = fl::nan, scalar maximum = fl::nan, SNorm *aggregation = fl::null)

## ◆ Aggregated() [2/2]

fl::Aggregated::Aggregated (const Aggregated &other)

## ◆ ~Aggregated()

virtual fl::Aggregated::~Aggregated ()

## ◆ activationDegree()

virtual scalar fl::Aggregated::activationDegree (const Term *forTerm) const

Computes the aggregated activation degree for the given term. If the same term is present multiple times, the aggregation operator is utilized to sum the activation degrees of the term. If the aggregation operator is fl::null, a regular sum is performed.

Parameters: forTerm is the term for which to compute the aggregated activation degree.
Returns: the aggregated activation degree for the given term.

## ◆ addTerm() [1/2]

virtual void fl::Aggregated::addTerm (const Term *term, scalar degree, const TNorm *implication)

Adds a new Activated term (from the parameters) to the fuzzy set.

Parameters: term is the activated term; degree is the activation degree; implication is the implication operator.

## ◆ addTerm() [2/2]

virtual void fl::Aggregated::addTerm (const Activated &term)

Adds the activated term to the fuzzy set. The activated term will be deleted when Aggregated::clear() is called.

Parameters: term is the activated term.

## ◆ className()

virtual std::string fl::Aggregated::className () const

Returns the name of the class of the term. Implements fl::Term.

## ◆ clear()

virtual void fl::Aggregated::clear ()

Clears and deletes the activated terms.

## ◆ clone()

virtual Aggregated* fl::Aggregated::clone () const

Creates a clone of the term. Implements fl::Term.

## ◆ complexity()

virtual Complexity fl::Aggregated::complexity () const

Computes the estimated complexity of evaluating the membership function. Implements fl::Term.
## ◆ complexityOfActivationDegree()

virtual Complexity fl::Aggregated::complexityOfActivationDegree () const

## ◆ complexityOfMembership()

virtual Complexity fl::Aggregated::complexityOfMembership () const

## ◆ configure()

virtual void fl::Aggregated::configure (const std::string &parameters)

Does nothing. Parameters: parameters are irrelevant. Implements fl::Term.

## ◆ getAggregation()

virtual SNorm* fl::Aggregated::getAggregation () const

Gets the aggregation operator. Returns: the aggregation operator.

## ◆ getMaximum()

virtual scalar fl::Aggregated::getMaximum () const

Gets the maximum of the range of the fuzzy set.

## ◆ getMinimum()

virtual scalar fl::Aggregated::getMinimum () const

Gets the minimum of the range of the fuzzy set.

## ◆ getTerm()

virtual const Activated& fl::Aggregated::getTerm (std::size_t index) const

Gets the term at the given index. Parameters: index is the index of the term. Returns: the activated term at the given index.

## ◆ highestActivatedTerm()

virtual const Activated* fl::Aggregated::highestActivatedTerm () const

Iterates over the Activated terms to find the term with the maximum activation degree. Returns: the term with the maximum activation degree.

## ◆ isEmpty()

virtual bool fl::Aggregated::isEmpty () const

Indicates whether the vector of activated terms is empty.

## ◆ membership()

virtual scalar fl::Aggregated::membership (scalar x) const

Aggregates the membership function values of $$x$$ utilizing the aggregation operator. Parameters: x is a value. Returns: $$\sum_i{\mu_i(x)}, i \in \mbox{terms}$$. Implements fl::Term.

## ◆ numberOfTerms()

virtual std::size_t fl::Aggregated::numberOfTerms () const

Returns the number of activated terms.

## ◆ operator=()

Aggregated& fl::Aggregated::operator= (const Aggregated &other)

## ◆ parameters()

virtual std::string fl::Aggregated::parameters () const

Returns the parameters of the term. Returns: "aggregation minimum maximum terms". Implements fl::Term.

## ◆ range()

virtual scalar fl::Aggregated::range () const

Returns the magnitude of the range of the fuzzy set, i.e., maximum - minimum.

## ◆ removeTerm()

virtual const Activated& fl::Aggregated::removeTerm (std::size_t index)

Removes the term at the given index without deleting it. Parameters: index is the index of the term. Returns: the removed term.

## ◆ setAggregation()

virtual void fl::Aggregated::setAggregation (SNorm *aggregation)

Sets the aggregation operator. Parameters: aggregation is the aggregation operator.

## ◆ setMaximum()

virtual void fl::Aggregated::setMaximum (scalar maximum)

Sets the maximum of the range of the fuzzy set. Parameters: maximum is the maximum of the range of the fuzzy set.

## ◆ setMinimum()

virtual void fl::Aggregated::setMinimum (scalar minimum)

Sets the minimum of the range of the fuzzy set.
Parameters: minimum is the minimum of the range of the fuzzy set.

## ◆ setRange()

virtual void fl::Aggregated::setRange (scalar minimum, scalar maximum)

Sets the range of the fuzzy set to [minimum, maximum]. Parameters: minimum is the minimum of the range of the fuzzy set; maximum is the maximum of the range of the fuzzy set.

## ◆ setTerms()

virtual void fl::Aggregated::setTerms (const std::vector< Activated > &terms)

Sets the activated terms. Parameters: terms is the vector of activated terms.

## ◆ terms() [1/2]

virtual const std::vector< Activated >& fl::Aggregated::terms () const

Returns an immutable vector of activated terms.

## ◆ terms() [2/2]

virtual std::vector< Activated >& fl::Aggregated::terms ()

Returns a mutable vector of activated terms.

## ◆ toString()

virtual std::string fl::Aggregated::toString () const

Returns the representation of the term in the FuzzyLite Language. See also: FllExporter. Reimplemented from fl::Term.

The documentation for this class was generated from the following file: Aggregated.h
How to copy one column from many files to a new file?

0 Entering edit mode 11 months ago Seq225 ▴ 90

Hello. I have 200 files, each with 2 columns. I want to copy the first column from each of the 200 files into a new file, so the new file will have 200 columns. How can I do this (using awk/cut/paste, etc.)? I have tried awk, but putting it in a for loop is difficult and I have failed to do so.

sequence genome assembly

2 Entering edit mode It would help if you could provide a sample. Is it a plain text file? Does it contain non-ASCII characters? Are there headers that you want to preserve? What is the separator between columns? Are there spaces in the entries?

2 Entering edit mode 11 months ago I know you wanted an awk/cut solution, but this Rscript does the trick. Save it as script.R in the folder that has the 200 txt files, run `Rscript script.R`, and it will merge the first columns and save them as "merged.txt". (This assumes the files are tab-separated; swap readr::read_tsv for readr::read_table if they are whitespace-delimited.)

if (!require("tidyverse")) install.packages("tidyverse")
library(tidyverse)
# take the first column of every .txt file in the working directory
files <- list.files(pattern = "\\.txt$")
cols <- purrr::map(files, ~ readr::read_tsv(.x, col_names = FALSE)[, 1])
merged <- dplyr::bind_cols(cols)
readr::write_csv(merged, path = "merged.txt", col_names = FALSE)

0 Entering edit mode Thanks a ton! It works!!

0 Entering edit mode Seq225 : Besides thanking posters, please accept answers (green check mark) to validate them and provide closure to the thread. Do this for your past questions as well. You can accept more than one answer if they work.
# Thread: minimum dimensions of an open box

1. ## minimum dimensions of an open box

1. The problem statement, all variables and given/known data

Determine the dimensions of a rectangular box, open at the top, having volume 4 m^3 and requiring the least amount of material for its construction. Use the second partials test. (Hint: Take advantage of the symmetry of the problem.)

2. Relevant equations

3. The attempt at a solution

Volume: V = 4 m^3. Let x = length, y = width, z = height, so 4 = xyz and x = 4/(yz). Because it is a rectangular box open at the top, the surface area is S = 2xz + 2yz + xy. Substituting x = 4/(yz) into the surface equation gives S = 8/y + 2yz + 4/z. To find the critical points, take the partials with respect to y and z and set them equal to zero: S_y = 2z - 8/y^2 and S_z = 2y - 4/z^2. Solving the equations, I get y = 0, z = 0 and y = 8, z = 1/16. The problem is, what should I do next?

2. Originally Posted by nameck (quoted above)

The solutions are $y=2\,,\,z=1\,\Longrightarrow x=2$ and you're done (just check why you got y = 8...??). Perhaps you overlooked the fact that you're positioning your box at the origin in 3-dimensional space, with sides parallel to the three axes. Then x = 2 is the length of the box's side along the x-axis, y = 2 the length along the y-axis, etc. If you have already studied Lagrange multipliers then you can double-check with them that the answer is indeed the above one. Tonio

3. OK, I fixed my calculation; I had a problem with the simultaneous equations. I got your answer, y = 2 and z = 1. Then I substituted the values into xyz = 4 and got x = 2. However, is this my final answer? Should I use the second partials test?

4. Originally Posted by nameck (quoted above)

Well... yes, in order to be formal. You'll see that the Hessian is a positive definite matrix, and thus you have a minimum (i.e., $f_{yy}>0\,,\,\,f_{yy}f_{zz}-(f_{yz})^2>0$) when evaluated at $(2,2,1)$. Tonio

5. Nice! I got it now, Tonio!! Thanks =)
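For the record, the second partials test at the critical point confirms the minimum (a short check of the answer above): with $S = \frac{8}{y} + 2yz + \frac{4}{z}$, the second partials are $S_{yy} = \frac{16}{y^3}$, $S_{zz} = \frac{8}{z^3}$, and $S_{yz} = 2$. At $(y,z) = (2,1)$ this gives $S_{yy} = 2 > 0$ and $D = S_{yy}S_{zz} - (S_{yz})^2 = (2)(8) - 4 = 12 > 0$, so the point is a local minimum. The box is therefore 2 m long, 2 m wide, and 1 m high.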
# Riddles of the Sphinx

## Martin Gardner

This book charms, informs, inspires, puzzles and delights, and the reader can dip in almost anywhere and get hooked by the natural lucidity of style and the friendly tone which are so characteristic of Martin Gardner. The book is well presented and beautifully printed. The book will enrich any bookshelf. — Gregory D. Economides in Mathematical Spectrum

The author did it again! He produced another delightful, charming book of puzzles! An impressive variety of topics includes mathematics (as deep as the Erdős-Szekeres theorem!), physics, astronomy, geology, linguistics, etc. — A. Soifer in Zentralblatt für Mathematik

The first (puzzle) provides an absolutely fascinating start to the book and leaves the reader drooling for more. It is impossible to detract from Gardner's brilliance or from the pleasure to be had from this collection. — Victor Bryant in New Scientist

Martin Gardner begins Riddles with questions about splitting up polygons into prescribed shapes, and he ends this book with an offer of a $100 prize for the first person to send him a 3 x 3 magic square consisting of consecutive primes. Only Gardner could fit so many diverse and tantalizing problems into one book. This material was drawn from Gardner's column in Isaac Asimov's Science Fiction Magazine. His riddles presented here incorporate the responses of his initial readers, along with additions suggested by the editors of this series. In this book, Gardner draws us from questions to answers, always presenting us with new riddles — some as yet unanswered. Solving these riddles is not simply a matter of logic and calculation, though these play a role. Luck and inspiration are factors as well, so beginners and experts alike may profitably exercise their wits on Gardner's problems, whose subjects range from geometry to word play to questions relating to physics and geology. We guarantee that you will solve some of these riddles, be stumped by others, and be amused by almost all of the stories and settings that Gardner has devised to raise these as yet unanswered questions.

Print on demand (POD) books are not returnable because they are printed at your request. Damaged books will, of course, be replaced (customer support information will be on your receipt).

Electronic ISBN: 9780883859476
Print ISBN: 9780883856321

Riddles of the Sphinx: PDF price $15.00; POD price $36.00.
Serving the Quantitative Finance Community

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Excel tricks

Quote: Originally posted by edouard: "I have fortnightly data (2 data points for each month) to put in a spreadsheet. Are there fancy things Excel is capable of when it comes to dealing with dates, such as '1H Jan-14'???"

OH!

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Excel tricks

Hi all. Is there a built-in function or a simple trick to delete the content of all cells (say, in a selection) that are NOT formulas? (Something less ugly than a macro deleting the content of cells that do not contain the '=' character.) Last edited by tagoma on February 9th, 2014, 11:00 pm, edited 1 time in total.

bearish Posts: 6631 Joined: February 3rd, 2011, 2:19 pm

### Excel tricks

The Excel VBA Range object has a property HasFormula, which is either true or false. By looping through each cell in your selection, you should be able to selectively delete the ones that come up false on this test.

MHill Posts: 489 Joined: February 26th, 2010, 11:32 pm

### Excel tricks

Quote: Originally posted by edouard (quoted above)

Select your cells, Ctrl + G (Go To command), choose 'Special', choose 'Formulas', 'OK', Delete.

Edit: Doh! Re-read the question: choose 'Constants', not 'Formulas'. Last edited by MHill on February 10th, 2014, 11:00 pm, edited 1 time in total.

bearish Posts: 6631 Joined: February 3rd, 2011, 2:19 pm

### Excel tricks

Quote: Originally posted by MHill (quoted above)

Nice! It actually came in handy today in tracking down cells that suffered from conditional formatting (a potential source of many headaches).

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Re: Excel tricks

Ok. So I inherited an old spreadsheet that was passed from hand to hand over time. At opening, this spreadsheet seems to try connecting to another spreadsheet, as it shows "Linking: [filename.xls] ..." at the bottom of Excel. As you guessed, I have no clue what that other spreadsheet is. I have found different "solutions" to remove this link (inspection of all types of objects in the spreadsheet, find/replace, ...) on the internet, but none has worked for me. But maybe you guys have some day had the same issue and actually found a solution? Many thanks.
Ok, so I finally found a solution. Actually, some Excel objects of type Name had their property Visible set to False, hence these named Excel objects were not shown in the Name Manager. From there, loop over all Name objects and set them to visible. Voilà!

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Re: Excel tricks

What is the most straightforward way to run/launch a (Python) script from within Excel? Something like: I would click on an Excel form control/ActiveX and it would run the script. (The Python code doesn't need to be embedded into the spreadsheet; both the spreadsheet and the Python scripts are stored and will stay on the same server.) Your great suggestions very welcome! Thank you.

Alan Posts: 10713 Joined: December 19th, 2001, 4:01 am Location: California Contact:

### Re: Excel tricks

Rarely use Excel, but maybe this?

bearish Posts: 6631 Joined: February 3rd, 2011, 2:19 pm

### Re: Excel tricks

A lower tech solution would be to invoke it via a Shell() command from a VBA sub. It's an asynchronous call and you can't directly communicate data from (or back to) the workbook, but it shouldn't be too hard to work around that.

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Re: Excel tricks

Thank you both for the suggestions, I do appreciate it. I first went down the xlwings path, but setting up the add-in, the Excel references, etc., and making sure all the stars are well aligned was a bit annoying and didn't quite work. (Don't get me wrong, I like xlwings, which I use intensively in other ways.) So I went down the shell way, based on this SO reply. I ended up with something like:

Function RunPython()
    Dim obj As Object
    Set obj = CreateObject("WScript.Shell")
    RunPython = obj.Run("pythonw.exe C:/ ..../my_great_snippet.py", 0, True)
End Function

Cuchulainn Posts: 64996 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact:

### Re: Excel tricks

xlwings is kind of stone age. Clunky. If your use cases are I/O in Excel (42!) then openpyxl might be an idea. The Excel model is a bit like VBA and C#. Your specification doesn't say what Excel and Python are doing; are you using Excel as a database to be updated by Python code? But maybe your use case is different.

from openpyxl import Workbook
import datetime

wb = Workbook()
ws = wb.active
ws1 = wb.create_sheet("Mysheet")
ws1 = wb.create_sheet("Tagoma")
wb.save(r"C:\ABC\XYZ\DD.xlsx")
ws1 = wb.create_sheet("Tagoma2")
ws1['A1'] = datetime.datetime(2021, 8, 29)
ws1.append([1, 2, 3, 4])
print(wb.sheetnames)
wb.save(r"C:\ABC\XYZ\DD.xlsx")

from openpyxl import Workbook
import datetime

wb = Workbook()
# grab the active worksheet
ws = wb.active
# Data can be assigned directly to cells
ws['A1'] = 42
# Rows can also be appended
ws.append([1, 2, 3, 4])
# Python types will automatically be converted
# ws['A2'] = datetime.datetime.now()
# Save the file
wb.save(r"C:\ABC\XYZ\sample.xlsx")

"Compatibility means deliberately repeating other people's mistakes." David Wheeler http://www.datasimfinancial.com http://www.datasim.nl

tagoma Posts: 18781 Joined: February 21st, 2010, 12:58 pm

### Re: Excel tricks

Hi Cuch. Thank you for the suggestion. Well, my whole project is ill-designed due to many constraints from the company's IT, users, management, etc. I acquire data and perform a bunch of calculations on them with a Python snippet; everything is then saved in a CSV file. Another Python snippet then cherry-picks data in that CSV file and fills specific cells of specific sheets of the Excel workbook.
There are other items in the spreadsheet that I have to check manually; that is, the spreadsheet has to be opened on my computer. Hence I thought maybe it would alleviate my main pain a bit if I launched the Python scripts from within the spreadsheet instead of searching the folder they are saved in. (I warned above that this project is sick....) The Shell solution (bearish) I have implemented does the job. Can you please explain a bit why you think xlwings is clunky? I mean, maybe there are some use cases where I should avoid it. Thank you!

Cuchulainn Posts: 64996 Joined: July 16th, 2004, 7:38 am Location: Drosophila melanogaster Contact:

### Re: Excel tricks

I was confusing xlwings with XLW https://xlw.github.io/ So, I take it back. Still, maybe a bit of redesign is an idea.........

> many constraints from company's IT

Non-sensical constraints?

"Compatibility means deliberately repeating other people's mistakes." David Wheeler http://www.datasimfinancial.com http://www.datasim.nl

bearish Posts: 6631 Joined: February 3rd, 2011, 2:19 pm

### Re: Excel tricks

Quote: Originally posted by Cuchulainn (quoted above)

As we discussed elsewhere, sometimes the ball of mud gets the job done.
Quadratic vector fields in the plane have a finite number of limit cycles

Publications Mathématiques de l'IHÉS, Tome 64 (1986), pp. 111-142.

@article{PMIHES_1986__64__111_0,
  author = {Bam\'on, Rodrigo},
  title = {Quadratic vector fields in the plane have a finite number of limit cycles},
  journal = {Publications Math\'ematiques de l'IH\'ES},
  pages = {111--142},
  publisher = {Institut des Hautes \'Etudes Scientifiques},
  volume = {64},
  year = {1986},
  zbl = {0625.58028},
  mrnumber = {88d:58095},
  language = {en},
  url = {http://archive.numdam.org/item/PMIHES_1986__64__111_0/}
}

Bamón, Rodrigo. Quadratic vector fields in the plane have a finite number of limit cycles. Publications Mathématiques de l'IHÉS, Tome 64 (1986), pp. 111-142. http://archive.numdam.org/item/PMIHES_1986__64__111_0/
##### Notes

This printable reviews the Grade 3 Common Core Mathematics Standards. Use for review, test preparation, or benchmark assessment.

## Math Review

1. Ellie looked at the fish in the pet store. She counted exactly 6 fish in each aquarium. There were 7 aquariums. What was the total number of fish in the aquariums?
1. 63  2. 42  3. 54  4. 72

2. Allison's mother baked a dozen cookies for Allison and her 3 friends to share. Allison wanted each person to receive the same number of cookies. She used this equation to find the number of cookies each of them would receive: 12 = ___ × 4. Which missing factor makes the equation true?
1. 3  2. 5  3. 9  4. 36

3. Leia needs to place the blocks shown into boxes. Each box holds 6 blocks. Which number sentence shows the number of boxes Leia needs?
1. 24 ÷ 4 = 6  2. 24 ÷ 4 = 8  3. 24 ÷ 6 = 6  4. 24 ÷ 6 = 4

4. Find the missing number. 72 ÷ 9 = ___ and ___ × 9 = 72.
1. 6  2. 7  3. 8  4. 9

5. Which multiplication property is shown? 5(3 + 8) = (5 × 3) + (5 × 8)
1. Identity  2. Distributive  3. Associative  4. Commutative

6. Which of the following could be used to solve this problem? Juan had a birthday party. He had 30 candy pieces to share with himself and four friends. How many pieces of candy would each person get?
1. thirty times 6  2. thirty plus 4  3. thirty divided by 5  4. thirty divided by 4

7. The numbers below make a pattern: 2, 5, 8, 11, ... Which number could be in this pattern?
1. 13  2. 15  3. 17  4. 19

8. Two hundred sixty-two students go to Westerbrook Elementary School. Round the number of students who go to Westerbrook to the nearest hundred.
1. 200  2. 260  3. 262  4. 300

9. The General Grocery Store sold 447 oranges and 281 apples on Friday. When each number is rounded to the nearest 10, which equation represents about how many total oranges and apples were sold?
1. 450 - 280 = 170  2. 400 - 300 = 100  3. 450 + 280 = 730  4. 400 + 300 = 700

10. The aquarium sold tickets to the polar bear exhibit on Saturday and Sunday. On Saturday, the aquarium sold 584 tickets. On Sunday, the aquarium sold 296 tickets. How many total tickets did the aquarium sell on Saturday and Sunday?
1. 770  2. 880  3. 900  4. 990

11. 30 × 4 = 60 × ___
1. 2  2. 3  3. 4  4. 5

12. Jerry draws lines to divide 4 different figures in half. His teacher asks him to shade 1/2 of each figure. (Figures 1-4 are shown.) Which statement is true about Jerry's figures?
1. Only figures 1 and 4 are correct.  2. Only figure 2 is correct.  3. All the figures are correct.  4. Figures 2 and 3 are correct.

13. If you were to write a fraction representing the number 9 on the number line, what number would go in the denominator?
1. 1  2. 4  3. 9  4. 10

14. Which symbol should go on the line to compare the shaded parts of the fraction models?
1. >  2. <  3. =

15. Which sentence is best represented by the fraction model?
1. One-half of the students at Rivers Elementary School buy lunch on Monday.  2. One-fourth of the students at Rivers Elementary School are in the third grade.  3. Two-thirds of the students at Rivers Elementary School play sports after school.  4. Three-fourths of the students at Rivers Elementary School ride the bus each day.

16. What time will it be in 23 minutes?
1. 2:28  2. 2:51  3. 3:14  4.
3:23

17. Kira buys a container of milk at the store. Which is the closest to the volume of milk Kira bought?
1. 1 gram  2. 1 liter  3. 10 grams  4. 10 liters

18. How many more days did it rain in May than June?
1. 2  2. 4  3. 6  4. 8

19. Which is the best estimate of the pencil's length in inches?
1. 1/2  2. 1 1/4  3. 1 1/2  4. 2 1/4

20. How many unit squares?
1. 11  2. 12  3. 13  4. 14

21. Parker makes the pattern shown with square tiles. He wants to rearrange his tiles and create two patterns so that he has no tiles left over. Which two patterns could Parker make? (Answer choices are shown as images.)

22. Ms. Marble is deciding between Rug A and Rug B for her dining room. Each box equals 1 square foot. (Rug A and Rug B are shown.) How much greater is the area of Rug A than the area of Rug B?
1. 2 square feet  2. 4 square feet  3. 8 square feet  4. 16 square feet

23. Perimeter = 30, BC = 6, CD = 6, ED = 2, EF = 3, AF = 4. What is the length of AB?
1. 6  2. 7  3. 8  4. 9

24. Taneisha is comparing rhombuses, rectangles, and squares. What do they always have in common?
1. They always have 4 sides.  2. They always have equal angles.  3. They always have equal side lengths.  4. They always have different angle measures.

25. Which two shapes have the same number of equal parts? (Answer choices are shown as images.)
# Connecting to an FTPS Server with SSL Session Reuse in Java 7 and 8 “Good programmers write good code… Great programmers reuse great code.”  Or so I told myself as I snagged an Apache Commons class to connect to a new vendor’s FTPS server.  Several hours of debugging later, however, I realized to my dismay that the omnipotent Apache Commons did not support a major security feature required by most modern FTPS servers.  This post outlines my process for discovering the flaw and the steps I took to engineer a reliable patch; if, however, you’ve been desperately Googling for solutions to “SSL session reuse required” and are on your last straw, you can jump ahead to the solution here. Although we may not always like to admit it, no tech company is an island: we often find ourselves reliant on third party vendors for applications from marketing to compliance, and we need secure methods for transferring data between ourselves and these vendors. Here at Wealthfront, we like to automate this process as much as possible, so we set up periodic jobs that scrape, push and pull our data as needed (see 3 Ways to Integrate Third-Party Metrics into Your Data Platform). One vendor we began working with only supports data transfer over FTPS (no, not SFTP), a method we had not used in our data platform previously; so I set about building some simple infrastructure to programmatically connect to an FTPS server and upload or download files. With the Apache Commons class straight out of the box, I tried the following (where client is an FTPSClient, and 21 is the default FTPS server’s command port): client.connect(host, 21); client.listFiles(DATA_FOLDER); Upon listing files in the DATA_FOLDER, I received the following from the server: PORT xxx,…,xxx 500 Illegal PORT command. (you can print the server’s response with FTPSClient.addProtocolCommandListener(new PrintCommandListener(System.out))) I quickly found that the above response is indicative of an active FTP session, in which the client specifies a data port for the server to initiate a data connection to. Our vendor’s FTPS server, however, was configured for passive mode, in which the server specifies a data port for the client to connect to (in order to avoid a client’s firewall rejecting the server’s attempt to connect; see this post for further discussion on active vs. passive FTP). One can specify passive mode with FTPSClient.enterLocalPassiveMode(). In my next attempt to list files, I then received the following from the server: 522 Data connections must be encrypted. This simply indicates that I needed to specify a private session with the PROT P (for “private”) command; however, from the original spec on FTP over TLS (p. 
9-10): “the PROT command MUST be preceded by a PBSZ command… For FTP-TLS… the PBSZ command MUST still be issued, but must have a parameter of ‘0’ to indicate that no buffering is taking place and the data connection should not be encapsulated.” So at this point my code contained the following series of commands: client.connect(host, 21); client.execPBSZ(0); client.execPROT("P"); client.enterLocalPassiveMode(); client.listFiles(DATA_FOLDER); Here’s where I hit my first non-trivial issue: 522 SSL connection failed; session reuse required: see require_ssl_reuse option in vsftpd.conf man page So it appears the vendor uses vsftpd to run their server, and after some research I discovered vsftpd (and most other FTPS servers) requires SSL session reuse between the control and data connections as a security measure: essentially, the server requires that the SSL session used for data transfer is the same as that used for the connection to the command port (port 21). This ensures that the party that initially authenticated is the same as the party sending or retrieving data, thereby preventing someone from hijacking a data connection after authentication in a classic man-in-the-middle attack. You can find the original blog post on the vsftpd patch here. Unfortunately the Apache Commons FTPSClient does not support this SSL session reuse behavior; in fact, there’s an open Apache NET Jira ticket to fix this exact issue. While that ticket remains open as of this writing, in the meantime, some folks went ahead and refactored the code to allow one to override a _prepareDataSocket_ method to hack the session reuse oneself (resolved Jira here). It appears that’s exactly what David Kocher over at Cyberduck has done in this revision to the open-source FTPS client. The Java Secure Socket Extension (JSSE) code is smart enough to reuse SSL sessions for the same host and port, but since the data port is different from the control port, one needs to artificially store the control session into the JSSE cache that is checked before generating a new SSL session. I simplified the Cyberduck code to remove any references to their context and created my own SSLSessionReuseFTPSClient: Briefly walking through the code: 1. _prepareDataSocket_ is called in FTPSClient._openDataConnection_ after calling the superclass’s method (FTPClient._openDataConnection_): protected Socket _openDataConnection_(String command, String arg) throws IOException { Socket socket = super._openDataConnection_(command, arg); this._prepareDataSocket_(socket); ... } 2. On line 21 (in v1_SSLSessionReuseFTPSClient.java) I retrieve the session associated with the socket passed in to _prepareDataSocket_; this is still the control session. 3. Next I retrieve the SSLSessionContext associated with this control session (line 22). This context contains the session cache that will be checked before generating a new SSL session; however the cache is a private field in SSLSessionContextImpl, so I need to use reflection to access it (note: this is bad practice and should only be used as a last resort hack) 4. After retrieving the session cache’s put method (line 27), I can finally store our control session into the cache using the data socket’s host name and port as the key (lines 29-30 generate this “name:port” key; line 31 stores the control session into the cache with this key). Using this v1_SSLSessionReuseFTPSClient, and the same connection code above, I was able to successfully retrieve and upload files to our vendor’s FTPS server… in Java 7. 
Unfortunately for me, the very next day we upgraded our data platform to Java 8, and to my dismay I once again saw:

522 SSL connection failed; session reuse required: see require_ssl_reuse option in vsftpd.conf man page

To determine the discrepancy between the two JREs (1.7.0 and 1.8.0), I used a debugger on my connection code with each JRE to see differences in how the session cache is handled. From the getKickstartMessage() method in the ClientHandshaker class (sun.security.ssl):

// Try to resume an existing session. This might be mandatory,
// given certain API options.
session = ((SSLSessionContextImpl)sslContext
        .engineGetClientSessionContext())
        .get(getHostSE(), getPortSE());

The method engineGetClientSessionContext() returns an SSLSessionContext containing the private session cache; SSLSessionContextImpl.get concatenates its first and second arguments with a colon, then calls get on its private session cache with this concatenated key. The method getHostSE() is defined in the Handshaker class and calls getHost() in SSLSocketImpl. Here is where one can easily see the discrepancy between the two JREs:

JRE 1.7.0

synchronized String getHost() {
    // Note that the host may be null or empty for localhost.
    if (host == null || host.length() == 0) {
        host = getInetAddress().getHostName();
    }
    return host;
}

JRE 1.8.0

synchronized String getHost() {
    // Note that the host may be null or empty for localhost.
    if (host == null || host.length() == 0) {
        if (!trustNameService) {
            // If the local name service is not trustworthy, reverse host
            // name resolution should not be performed for endpoint
            // identification. Use the application original specified
            // hostname or IP address instead.
            host = getOriginalHostname(getInetAddress());
        } else {
            host = getInetAddress().getHostName();
        }
    }
    return host;
}

The boolean trustNameService is defined statically and defaults to false:

/*
 * Is the local name service trustworthy?
 *
 * If the local name service is not trustworthy, reverse host name
 * resolution should not be performed for endpoint identification.
 */
static final boolean trustNameService =
        Debug.getBooleanProperty("jdk.tls.trustNameService", false);

So in 1.7.0, the String returned by getHost() was the host name, something like "ec2-…compute.amazonaws.com". In 1.8.0, however, the reverse host name resolution is prevented by default (beginning with Update 51), and getOriginalHostname returned the host IP, something like "123.45.67.89". The problem with my code adapted from Cyberduck is that I always stored into the cache using the host name as the first part of the key:

final String key = String.format("%s:%s",
        socket.getInetAddress().getHostName(),
        String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);

Solution

One can make the code version-independent by calling the same getHost() method when storing into the session cache. Unfortunately this method is package-private, so I once again needed to use reflection to access it. The final code for my SSLSessionReuseFTPSClient is below:
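The post originally embedded the final class as a gist, which is not reproduced in this copy. The following is a minimal sketch of the approach described above; the JSSE internals it reflects on (the private sessionHostPortCache field of SSLSessionContextImpl and the package-private SSLSocketImpl.getHost() method) follow the walkthrough, but treat the exact field and method names as assumptions against Oracle/OpenJDK 7 and 8 rather than the author's verbatim code.

import java.io.IOException;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.Socket;
import java.util.Locale;

import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.SSLSocket;

import org.apache.commons.net.ftp.FTPSClient;

public class SSLSessionReuseFTPSClient extends FTPSClient {

    // Called on the data socket before its TLS handshake: seed JSSE's session
    // cache with the control connection's session so the handshake resumes it.
    @Override
    protected void _prepareDataSocket_(final Socket socket) throws IOException {
        if (socket instanceof SSLSocket) {
            // _socket_ is the control connection inherited from SocketClient
            final SSLSession session = ((SSLSocket) _socket_).getSession();
            if (session.isValid()) {
                final SSLSessionContext context = session.getSessionContext();
                try {
                    // The session cache is a private field of SSLSessionContextImpl
                    // (assumed name: sessionHostPortCache), so reflection is required.
                    final Field cacheField =
                            context.getClass().getDeclaredField("sessionHostPortCache");
                    cacheField.setAccessible(true);
                    final Object cache = cacheField.get(context);
                    final Method put =
                            cache.getClass().getDeclaredMethod("put", Object.class, Object.class);
                    put.setAccessible(true);
                    // Build the key with the same package-private getHost() that JSSE
                    // itself uses, so the key matches in both Java 7 and Java 8.
                    final Method getHost = socket.getClass().getDeclaredMethod("getHost");
                    getHost.setAccessible(true);
                    final String host = (String) getHost.invoke(socket);
                    final String key = String.format("%s:%s", host,
                            String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
                    put.invoke(cache, key, session);
                } catch (Exception e) {
                    throw new IOException("Unable to seed the SSL session cache", e);
                }
            }
        }
    }
}

With a client like this, the same connect / execPBSZ(0) / execPROT("P") / enterLocalPassiveMode() sequence shown earlier works unchanged on both JREs. A common variant of this fix avoids reflecting getHost() altogether by storing the session under two keys, one built from the host name and one from the host address, which covers both JRE behaviors.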
# Model Zoo Overview

## Evaluation Environment

- The CPU evaluation environment is based on the Snapdragon 855 (SD855).
- The GPU evaluation environment is based on a V100 with TensorRT; the evaluation script is as follows.

#!/usr/bin/env bash
export PYTHONPATH=$PWD:$PYTHONPATH
python tools/infer/predict.py \
    --model_file='pretrained/infer/model' \
    --params_file='pretrained/infer/params' \
    --enable_benchmark=True \
    --model_name=ResNet50_vd \
    --use_tensorrt=True \
    --use_fp16=False \
    --batch_size=1
# Inspired By Arturo Presa

Calculus Level 5

What can we say about the function $$f: \mathbb{R}^+ \rightarrow \mathbb{R}$$ which satisfies $f(xy) = f(x) + f(y) ?$
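A first observation (a hint at the structure, not the full solution): setting $$x = y = 1$$ gives $f(1) = f(1 \cdot 1) = f(1) + f(1),$ so $$f(1) = 0$$. Under the additional assumption of continuity, the classical result for this functional equation is that $$f(x) = c \ln x$$ for some constant $$c$$.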
# Django tables2 list view does not support filtering on model properties

Which kind of sucks. If you want to filter a column, it has to either be an aggregate function of all rows, or it has to be stored in that table or on a path from that table. It can't be a function of that row. Which kind of sucks: it means that any row-level functional properties have to be maintained when you do a row save.

And thus were spent 4 hours of my Sunday. You're welcome.

# What's up with AWS Cognito and Django?

I posted questions on Stack Overflow and the AWS Forum. No answer. I even texted people on LinkedIn who are machers at AWS. Nada. Not impressed! Sad!

# Refined photon question, posted to Stack Exchange, let's see if it gets crushed or discarded

Posted on Stack Exchange:

Mark Andrew Smith's PhD thesis from 1994 examines relativistic cellular automata models, as does a 1999 paper by Ostoma and Trushyck. One topic not discussed is the information required in a cell to represent photons in transit. Suppose we have cells arrayed in a cube so that each cell has 26 neighbors, and suppose there are $N$ cells in the simulation, so it requires $\log{N}$ bits to represent a cell location. If a photon in motion is currently in a cell, its direction can be represented by the location of the farthest cell it will reach on its straight-line trajectory. Any cell can originate a photon and can receive photons passing through from any other cell. So each cell must be able to represent $N \log{N}$ bits of information, to represent all photons in transit from all possible sources.

Question: Is there any schema that could represent the set of all photons passing through a cell using less information, with reasonable fidelity?

Question: Photons are bosons, so the Pauli Exclusion Principle does not restrict them: any number of photons can occupy a single point in space. In the limit (real physical space), does each point in space contain an infinite number of photons? This would require infinite bits to represent, and storage of infinite bits requires infinite energy. If so, does this pose a challenge to the idea, expressed in Fredkin's Digital Philosophy, that the universe is in fact a cellular automaton, with the limiting speed of light simply coinciding with the "clock speed" of the automaton, i.e. the rate at which photons can move from one cell to the next?

# Correct reproduction of BDM

Someone attempted to reproduce BDM (Bueno de Mesquita's expected-utility forecasting model), had problems, and posted on Code Review Stack Exchange asking for insight. The dummies there criticized the white space and variable names in his code. I found someone's blog post with a correct answer and posted it. Sanctimonious and clueless lifers on the site deleted the information. The rules of Stack Exchange pretty much guarantee that narrow-minded lifers, similar to Wikipedia edit patrollers, will defend Stack Exchange against any useful content. Oh well. Here's my answer:

OP is trying to write a Python program to reproduce a claimed calculation result of Bueno De Mesquita (BDM). There is another attempt to reproduce this calculation, in Python, by David Masad, "Replicating a replication of BDM". Masad provided Python code, and also showed an approximately 20% divergence in the median score, starting from the same example, the same inputs, and the same references. Jeremy McKibben-Sanders then replicated the model, with results matching BDM. Masad added a new post to discuss the coding issues which led him astray.
Reading those posts and their code, and comparing with the above code, will lead to the correct diagnosis for the above code.

# Little League baseball: Play to win or play for development

From Kindergarten through 2nd grade, my son played in Little League baseball, and all players were rotated through all positions during the game; keeping score was discouraged. Pitching was by machine. Come 3rd grade, things change: the emphasis is now on winning. Players are selected for particular positions that they keep throughout the season. One or two players are selected to pitch, and no others are trained in pitching. The coach is a former minor league player with a focus on winning. If a kid can't bat (unless it's the lone girl on the team), he will signal them to walk or bunt. My son hated it, and we just dropped out.

There are two pressures on the coach. One is parents who want to see their kid's team win at all costs, whose kids are docile enough to accept any position. The other (apparently a great minority) is parents like myself, who want to see their 9-year-olds having fun and learning to play all the positions in the game.

# Cellular automata in physics and information quantity of a cell

I was taking a look at the 1994 PhD thesis of Mark Andrew Smith on Cellular Automata Methods in Mathematical Physics. I could only find one subsequent paper by Smith, on polymer simulation in 1999 with B. Ostrovsky. I assume he is no longer active. The only other work I found was some apparently self-published work by Canadian engineers in 1999, Tom Ostoma and Mike Trushyk. Like Smith, they didn't publish anything after 1999. It doesn't seem to be an actively pursued field. The only reason I could find for this lack of pursuit was a comment on the Math Stack Exchange website by Willie Wong stating that one of the reasons it may be difficult to model Minkowski space based on cellular automata is that there are no "non-trivial" finite subgroups of O(3,1), where non-trivial means that it doesn't just reduce to a finite subgroup of O(3) via conjugation. So while cellular automata can manifestly be homogeneous and isotropic (and so admit a discrete O(3) symmetry), it becomes conceptually difficult to imagine some cellular automaton capturing Lorentz symmetry.

# Spring reading: The Sparrow (*spoilers*)

Mary Doria Russell's The Sparrow, like Michel Faber's The Book of Strange New Things, is a novel which attempts to give a realistic vision of first contact with a self-aware, intelligent alien species. Each is also a novel by an author new to writing science fiction: in Faber's case because he comes from what is called the "literary community" (no Wikipedia definition available), and in Russell's case because this is a first novel by an anthropologist who had previously written only academic articles.

Russell's aliens are, in a way, much less alien than Faber's aliens. Both have arms and legs, are bipedal, have a language, live in houses, and have some technology. Faber's aliens are otherwise as different from people as squid and hamsters. Russell's aliens are modelled after kangaroos and tigers.

In both cases, Russell and Faber downplay and lower the drama and strangeness of first contact. They both use the colonial analogy, like Marco Polo first meeting the Chinese or Spaniards first encountering Aztecs. In this analogy, the presence of others is not so astonishing and is not the focus of awareness; rather, it is simply the inability to recognize social cues and differences in status hierarchy.
For example, imagine Meriwether Lewis running into Kim Jong Un after walking over a hill. The social expectations and assumptions will be quite different, and one party may behave with a level of haughtiness and indifference, despite the novelty and strangeness of the encounter, that catches the other party quite unawares. That, in essence, is the plot of The Sparrow. Russell's aliens are just not as alien as Faber's, because she makes a lot of assumptions that are Earth-normative: the atmosphere and gravity of the alien planet are not discussed, and one assumes they are identical to Earth's. There are two similar species. One turns out to be domesticated herbivore prey. The other turns out to be carnivore predators, who look similar as a result of evolutionarily adaptive aggressive mimicry. The herbivores are like big cuddly kittens, have very dextrous hands, and are very social and warm. The carnivores have larger teeth and three-fingered, sharp claws, and are very hierarchical and cold. They have different kinds of intelligence, but both species are intelligent and capable of change.

The novel adopts challenge to religious faith as a theme but somewhat tiresomely overplays it. Both the humans being social at rest in the exploration group and the humans giving each other a hard time in the Jesuit context are somewhat heavily and stereotypically written. The construction of the dual species, and the ecological imbalance accidentally introduced by the visiting human party, are cleverly designed, as one would expect of a good scientist exploring a scenario in their domain.

I bought the sequel, Children of God, which will arrive in a few weeks. The New York Times didn't like it. Russell plays out the scenario a little more with a return visit. I'm looking forward to it! My primary takeaways from this book:

• First contact with aliens could play out just like first contact in the human context, for example when Christian missionaries came to Japan in the 1500s.
• We have to be very careful about the unintended ecological impact of human ideas on alien society. The predator/prey society depicted in Russell's book had strict population controls and no risk of prey insurrection. The human concept of gardening interfered with population controls, and the human concepts of strength in numbers and retaliation upset the political stability of the dual society.
• The aliens might not like us, might not find us that interesting, and may look down on us, even if they are technologically inferior, so we have to be very careful about making assumptions about social hierarchies, status, and level of empathy. (The essence of diplomacy, I suppose.)
• Maybe we can make asteroids into self-fueling space ships. (Her one cool and relatively unexplored technical idea.)

Finally, note the novel was written in 1998, and she has a relatively uneven scorecard as a futurologist. She got a few things right, like tablet computing, but mostly her timeline is way too ambitious: considering it's 2017 as I write, the following conditions were supposed to hold in 2016:

• Students do not yet become indentured servants to pay for college scholarships.
• Japan is not the dominant economic, military and political power.
• Asteroids are not yet so thoroughly and routinely mined that you can go to a broker for a used one, equipped with engines, that has just the right shape.
• Jesuits don't commission space explorations.
# itoa() - Can the format be changed?

## Recommended Posts

Hi experts,

Code sample:

Any_Variable = 22;
char Temp_Score[5];
itoa(Any_Variable, Temp_Score, 10);
Draw_String(Temp_Score, 10, 20, 14);

In the above code the Draw_String function displays the value as "22" in mode 13h. Is there any way to display the value in "00022" format, i.e. with preceding 0's making it 5 digits? The problem with the above code is that whenever Any_Variable turns 1, the function displays the value as "12", i.e. the old "2" is not erased. Had the function used preceding 0's, it would have displayed the value as "01", which is what is required. This is possible using sprintf(), but sprintf() is slightly slower than itoa(). Any help!!

Thanks
Pramod

[Edited by - tppramod on May 15, 2005 8:53:00 AM]

##### Share on other sites

The only way I could think of doing this is by doing something like the following:

char nString[6] = "00000";
int nNumber = 22;
itoa(nNumber, (nString + 5) - 2, 10);

Unfortunately the 2 obviously needs to be replaced by the number of digits in nNumber, so I thought up this *ugly* beast:

itoa(number, nString + 4 - (((number/10 > 0) ? 1 : 0) + ((number/100 > 0) ? 1 : 0) + ((number/1000 > 0) ? 1 : 0)), 10);

Which works, but is, quite frankly, hideous and inelegant, and someone will probably think up a much better method!

##### Share on other sites

IIRC:

sprintf(Temp_Score, "%05d", Any_Variable);

##### Share on other sites

You can format this with the sprintf function.

##### Share on other sites

Quote: Original post by tppramod
"This is anyway possible using sprintf(), but it is slightly slower than itoa."

##### Share on other sites

I am avoiding sprintf() only because it is slower than itoa(). But I really don't know what makes it slower. Maybe the padding? I don't know. I am trying out ScootA's method, which seems to be working... expecting some better method...

thanks
pramod

##### Share on other sites

Quote: Original post by tppramod
"I am avoiding sprintf() only because it is slower than itoa()."

Have you tried sprintf? Is the speed difference really affecting your program that much? If so, how often are you calling this? You may be able to streamline your code, and offset the speed cost of sprintf.

##### Share on other sites

char *PadInt(int num)
{
    static char MyNumber[6] = {0,0,0,0,0,0}; /* enough to hold 5 digits + NUL */
    int NumZeros = 0;
    memset(MyNumber, '0', 5); /* set the first five chars to the character '0' */
    if (num >= 10000)
        NumZeros = 0;         /* no padding required */
    else if (num >= 1000)
        NumZeros = 1;
    else if (num >= 100)
        NumZeros = 2;
    else if (num >= 10)
        NumZeros = 3;
    else
        NumZeros = 4;
    itoa(num, MyNumber + NumZeros, 10);
    return MyNumber;
}

Now you can do this in your loop :)

##### Share on other sites

I would not be surprised if sprintf was actually faster than the above hacks.

##### Share on other sites

Quote: sprintf

My weekly "don't use sprintf, use snprintf" comment. snprintf does everything sprintf does, plus gives length-safety. Have a look at Exceptional C++ Style by Herb Sutter for more details.

Jim.
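Putting the thread together, here is a minimal sketch of the snprintf approach (assuming a C99 compiler; Draw_String stands in for the poster's own mode-13h routine and is left commented out):

#include <stdio.h>

void draw_score(int score)
{
    char buf[6];                              /* 5 digits + NUL terminator */
    snprintf(buf, sizeof buf, "%05d", score); /* e.g. 22 -> "00022"; never overflows buf */
    /* Draw_String(buf, 10, 20, 14); */
}

Unlike the itoa() hacks above, this needs no digit counting, and the fixed width means stale digits from a previous frame are always overwritten.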
# General structure

In this document, all paths are relative to the working directory of the server (with current data, world/map/).

The script settings are initialized by the following line in conf/map_athena.conf, which is read by map_config_read(cfgName):

import: npc/scripts.conf

npc/scripts.conf is a manually-maintained file that loads functions, and then defers to the automatically-generated npc/_import.txt for map-local things. It is also used to enable holiday quests.

Besides "import:", the only other relevant option is "npc:", which schedules a script file to load (though it may not necessarily contain an npc). There's also "delnpc:", which removes it from the list of things to load. As a special exception, "npc: clear" or "delnpc: all" will empty the list. You should not need "npc: clear" or any form of "delnpc:".

In addition, item scripts (equip and use) are loaded from db/itemdb.txt, but this is only a script body, not a script file. Equip scripts are evaluated every time the player's stats need to be recalculated (which happens a lot); they must not modify the player's equipment.

## script files

The function that actually loads the list of script files is do_init_npc(). Each line in a file must be less than 1020 bytes.

The first thing this does for each file is skip lines that consist only of comments (// at the very beginning of the line), compress runs of spaces to a single space (I think), and normalize runs of tabs-and-pipes into a single tab (for compatibility purposes). That's probably more detail than a "Basics" page should have.

Then, it tries to match either 3 or 4 words separated by tabs (in current data, normalized from pipes) or separated by spaces (huh?). It checks whether the first word is not one of "-" or "function", and in that case scans it for a mapname terminated by a comma. If the map is not on the server, or the mapname is too long, it skips to the next line. This will probably cause bad things to happen.

The second word must be one of "warp", "shop", "script", "monster", or "mapflag". There is also "duplicate...", but that shouldn't be used. All of these require 4 words except "mapflag", which may have only 3 in some cases. If the second word is "script" and the first word is "function", it is a function; otherwise it is the second word.

### warp

npc_parse_warp(w1, w2, w3, w4)

w1 is mapname, x, y. Spaces are optional after the commas. The mapname can contain anything but a comma. w4 is xs, ys,mapname, to_x, to_y. There must not be a space before mapname, or it will be considered part of it. w3 is the name of the warp.

### shop

npc_parse_shop(w1, w2, w3, w4)

w1 is mapname, x, y, dir. w4 is repeated ,<id>:<value> or ,<name>:<value> for the items in the shop. w3 is the name of the shop NPC.

### monster

npc_parse_mob(w1, w2, w3, w4)

w1 is mapname,x,y,xs,ys, where xs and ys are optional and default to 0. This represents the spawn area. w4 is mobclass, num, delay1, delay2,eventname, where the delays and eventname are optional and default to 0 and "". num times, a new mob spawner is created. w3 is the name; "--en--" or "--ja--" means to copy the appropriate name from the mob db. (A mob spawner is just a mob that automatically respawns after it is killed, after a certain delay.)
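Hypothetical examples of a warp line and a monster line in this format (single tab characters separate the four words; the map, warp, and mob names here are invented for illustration and do not refer to actual game data):

map1.gat,64,30	warp	toCave	2,2,cave1.gat,10,10
map1.gat,40,40,5,5	monster	Slime	1002,6

The first line warps anyone within 2 cells of (64,30) on map1 to (10,10) on cave1; the second creates six spawners for mob class 1002, displayed as "Slime", with default delays.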
### mapflag

npc_parse_mapflag(w1, w2, w3, w4)

w1 is mapname. w3 is one of: nosave nomemo noteleport nowarp nowarpto noreturn monster_noteleport nobranch nopenalty pvp pvp_noparty pvp_noguild pvp_nightmaredrop pvp_nocalcrank gvg gvg_noparty nozenypenalty notrade noskill nopvp noicewall snow fog sakura leaves rain no_player_drops town

Many of those don't necessarily work. If w3 is nosave, w4 is "SavePoint" or mapname,x,y. If w3 is pvp_nightmaredrop, w4 is arg1, arg2, per; arg1 may be "random" or an item id; arg2 may be "inventory", "equip", or "all".

### script

npc_parse_script(w1, w2, w3, w4, <...>)

w1 is "-" or mapname, x, y, dir. w4 must have a comma. Then a script body is parsed from { to }. See below.

If the npc is on a map, w4 is class,xs,ys or just class. If class is negative but the NPC is on a map, the first line of the body may be used as an event called exname.

w3 is name or name::exname. If exname is not provided, it is the same as name.

The parsed script body is scanned for labels starting with "On" (not case sensitive), and each of these is made available as an event called exname::labelname. Then it looks for labels of the form OnTime<ticks>, case sensitive, and these are registered as timer events.

### function

npc_parse_function(w1, w2, w3, w4, <...>)

A script body is parsed from { to }. Then the script is registered as a function (for use with callfunc) with the name w3, which currently has a maximum of 49 characters.

# Script Body

To avoid too much header depth, this is at the top level. Of course, it is also where you spend most of your time.

A script body starts with a { and goes to the first }. It may occur in a script or function, an item use or equip script, or in the magic file. The function that handles this is parse_script(), and it returns some bytecode.

Inside a script body, each line is either `label: line;` or just `line;`, where line is empty or `command args`. Commands are checked for number of arguments. args is 0 to 128 expressions. There is a warning if args are not separated by commas, except after the condition expression of the if command. Note that if statements *are* properly nested, so this can be used for short-circuiting, since the && and || operators don't do it.

An expression is a subexpression with -1 precedence.

A subexpression can be a bare - (i.e., followed by a comma or semicolon after skipping spaces), which is interpreted as a label meaning the next line.

A subexpression can be a -, !, or ~ followed by an expression with 100 precedence. This does what you expect, except for function calls.

A subexpression can be a simple expression, followed by an operator with the following precedences (higher binds tighter):

8  (  <function call>
7  *  /  %
6  +  -
5  &  >>  <<
4  |
3  ^
2  ==  !=  >=  >  <=  <
1  &&
0  ||

If it is a function call, the left simple expression must be a function, and 0 to 128 arguments are parsed, followed by a ). Functions are checked for number of parameters. If it is not a function call, another subexpression is parsed with the given priority.

A simple expression may be a parenthesized subexpression with -1 precedence.

A simple expression may be an integer, with optional sign. Integers may be decimal, octal, or hexadecimal.

A simple expression may be a string literal, delimited by "". There's something funky with backslashes. A string must not contain an embedded newline.

Otherwise, a string is a name, which may be a function, a label, or a variable. If a name is not a function and is followed by a [, it is translated into a call to the getelementofarray function (which returns a reference). The subexpression has precedence -1 and must be followed by a ].
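A hypothetical illustration of how these precedences play out (the variable names are invented; set assigns a value):

set @x, 1 + 2 * 3;          // * binds tighter than +, so @x is 7
set @y, @x > 5 && @x < 10;  // comparisons bind tighter than &&, so @y is 1
set @z, @items[2] + 1;      // @items[2] becomes getelementofarray(@items, 2)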
Note that there is much magic to make labels work properly before they're declared.

# Variables

A variable is not a builtin function or label, and matches the following: an optional $, an optional @ or l, two optional # (in which case there should be no $ or @), a sequence of alnum or _ characters, and an optional $. The meanings are as follows:

{| class="wikitable" style="width:650px;" border="1"
|-
! Prefix !! Suffix !! Meaning !! Functions
|-
| || || Any name found in "db/const.txt" of type 0 is flattened at script compile time. || N/A
|-
| || || Any name found in "db/const.txt" of type 1 is a special player parameter, and special logic is used to get/set it. || pc_readparam, pc_setparam
|-
| || || A permanent variable attached to the character object, stored by the char-server in "save/athena.txt". || pc_readglobalreg, pc_setglobalreg
|-
| @ or l || || A temporary variable attached to a player. Reset when the player logs out or when the server restarts. Don't start variable names with l; this prefix will be removed! || pc_readreg, pc_setreg
|-
| @ or l || $ || A temporary string variable attached to a player. Reset when the player logs out or when the server restarts. Don't start variable names with l; this prefix will be removed! || pc_readregstr, pc_setregstr
|-
| $ || || A global permanent variable, stored by the map-server in "save/mapreg.txt". || mapreg_db search, mapreg_setreg
|-
| $ || $ || A global permanent string variable, stored by the map-server in "save/mapreg.txt". || mapregstr_db search, mapreg_setregstr
|-
| $@ || || A global temporary variable. This is important for scripts which are called with no RID attached. || mapreg_db search, mapreg_setreg
|-
| $@ || $ || A global temporary string variable. This is important for scripts which are called with no RID attached. || mapregstr_db search, mapreg_setregstr
|-
| # || || A permanent account-based variable, stored by the char-server in "save/accreg.txt". || pc_readaccountreg, pc_setaccountreg
|-
| ## || || A permanent account-based variable, stored by the login-server in "save/account.txt". These used to be broken, but should work now. You shouldn't use these until we implement worlds on the main server. || pc_readaccountreg2, pc_setaccountreg2
|}

The $ suffix indicates a string, and is only allowed for player temporary, global persistent, and global temporary variables.

== Arrays ==

Variables of player temporary, global persistent, or global temporary scope may be arrays. @array[0] is equivalent to @array, but you should always include the [0] if you use an array. Array indices go from 0 to 127.
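A hypothetical illustration of the prefixes, suffixes, and arrays above (all names are invented; set assigns a value):

set Quests_Done, 2;       // no prefix: permanent, stored with the character
set @round, 5;            // @: temporary, gone when the player logs out
set $@participants, 10;   // $@: global temporary, works with no RID attached
set #Fame, 100;           // #: permanent, shared by the account's characters
set @greeting$, "hello";  // $ suffix: a string variable
set @prizes[0], 512;      // an array element; @prizes alone means @prizes[0]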
=Old Introduction=

This page is based on Fredzilla's "Athena Script Commands, A reference manual for the eAthena scripting language", originally located at http://buwinow5.tripod.com/ . More recent versions can be found at http://eathena-project.googlecode.com/svn/trunk/doc/script_commands.txt and http://eathena.ws/board/ , but they are unlikely to be compatible with actual TMW versions of the server.

This document is a reference manual for all the scripting commands and functions available in current eAthena SVN. It is not a simple tutorial. When people tell you to "Read The F***ing Manual", they mean this.

The information was mostly acquired by looking up how things actually work in the source code of the server, which was written by many people over time, many of whom don't speak English, never left any notes, or are otherwise not available for comment. As such, anything written here might not be correct; it is only correct to the best of our knowledge, which is limited.

This document is poorly structured and rather messy in general. In fact, further cleaning up and reordering this document is probably pointless, due to the upcoming switch to the Lua scripting language, which will rid us of most of the problems mentioned herein and make a new manual necessary. But while we have this one, we should make the most of it, and it might be helpful in making sure the new Lua engine can actually do everything useful that the old engine could.

This document will not teach you basic programming by itself. It's more of a reference for those who have at least a vague idea of what they want to do and want to know what tools they have available to do it. We've tried to keep it as simple as feasible, but if you don't understand it, getting a clear book on programming in general will help better than yelling around the forum for help. A little learning never caused anyone's head to explode.

=Structure=

The commands and functions are listed in no particular order:

*Name of the command and how to call it.
*Descriptive text.
*A small example if possible. It will usually be incomplete; it's there just to give you an idea of how the command works in practice.

To find a specific command, use Ctrl+F (or whatever keys call up a search function in whatever you're reading this with), put an * followed by the command name, and it should find the command description for you. If you find anything omitted, please respond. :)

=Syntax=

Throughout this document, wherever a command wants an argument, it is given in <angle brackets>. This doesn't mean you should type the angle brackets. :)

If an argument of a command is optional, it is given in {curly brackets}. You've doubtless seen this convention somewhere; if you didn't, get used to it, that's how the big boys do it.

If a command can optionally take an unspecified number of arguments, you'll see a list like this:

command <argument>{,<argument>...<argument>}

This still means they will want to be separated by commas.

Where a command wants a string, it will be given in "quotes"; if it's a number, it will be given without them. Normally, you can put an expression, like a bunch of functions or operators returning a value, in (round brackets) instead of most numbers. Round brackets will not always be required, but they're often a good idea.

Wherever you refer to a map name, it's always '''mapname.gat''' or '''mapname.afm''' if you are using AFM maps (if you don't know what they are, you aren't using them), and not just ''"mapname"''. While some commands do know that if you didn't give ''".gat"'', they should add it, it's pretty tricky to tell which ones they are.

=Script loading structure=

Scripts are loaded by the map server as referenced in the 'conf/map_athena.conf' configuration file, but in the default configuration, it doesn't load any script files itself. Instead, it loads the file '''npc/scripts_main.conf''', which itself contains references to other files.
The actual scripts are loaded from txt files, which are linked up like this:

npc: <path to a filename>

Any line like this, invoked, ultimately, by '''map_athena.conf''', will load up the script contained in this file, which will make the script available. No file will get loaded twice, to prevent possible errors.

Another configuration file option of relevance is:

delnpc: <path to a filename>

This will unload a specified script filename from memory, which, while seemingly useless, may sometimes be required.

Whenever '''//''' is encountered in a line upon reading, everything beyond it on that line is considered to be a '''comment''' and is ignored. This works wherever you place it.

Upon loading all the files, the server will execute all the '''top-level commands''' in them. No variables exist yet at this point, and no commands can be called other than those given in this section. These commands set up the basic server script structure - create NPC objects, spawn monster objects, set map flags, etc. No code is actually executed at this point except them.

The top-level commands of the scripting language are pretty confusing, since they aren't structured like you would expect commands to be, command name first; rather, they normally start with a map name.

* The confusing '''tab symbols''' in the '''top-level commands''', used to divide their arguments, have been replaced here by the symbol "'''|'''".

==Top Level Commands==

Here is a list of valid top-level commands:

===Set a map flag:===

<map name>|mapflag|<flag>

This will, upon loading, set a specified map flag on a map you like. These are normally in files inside 'conf/mapflag' and are loaded first, so by the time the server's up, all the maps have the flags they should have. Map flags determine the behavior of the map regarding various common problems; for a better explanation, see '[[setmapflag]]'.

===Create a permanent monster spawn:===

<map name>,<x1>,<y1>,<x2>,<y2>|monster|<monster name>{,<level>}|<mob id>,<amount>,<delay1>,<delay2>,<event name>

'''Map name''' is the name of the map the monsters will spawn on. x1/y1-x2/y2 is a square of map coordinates which will limit where they will initially spawn. Putting zeros instead of these coordinates will spawn the monsters randomly. It's not certain whether monsters will later be able to venture out of this square when randomly moving or not. (Can anyone confirm?)

'''Monster name''' is the name the monsters will have on screen, and has no relation whatsoever to their names anywhere else. It's the mob id that counts, which identifies the monster record in the 'mob_db.txt' database of monsters. If the mob name is given as "--ja--", the 'japanese name' field from the monster database is used (which, in eAthena, actually contains an english name); if it's "--en--", it's the 'english name' from the monster database (which contains an uppercase name used to summon the monster with a GM command).

If you add 4000 to the monster ID, the monster will be spawned in a 'big version' (monster size class will increase), and if you add 2000, the 'tiny version' of the monster will be created. This will not, however, make the monster spawn with a bigger or smaller sprite, like with the @monstersmall/@monsterbig GM commands. Monster size class relates only to the damage calculation.

'''Amount''' is the amount of monsters that will be spawned when this command is executed; it is affected by the spawn rates in 'battle_athena.conf'.

'''Delay1''' and '''delay2''' are the monster respawn delays - the first one counts the time since a monster defined in this spawn was last respawned, and the second one counts the time since a monster of this spawn was last killed. Whichever turns out to be higher will be used. If the resulting number is smaller than a random value between 5 and 10 seconds, this value will be used instead. (Which is normally the case if both delay values are zero.) If both delay values are -1, the monster will never respawn upon death until the server restarts. The times are given in 1/1000ths of a second.

'''Level''' overrides the monster's level from the monster id database; if it is 0, the level from the database is used.

'''Event name''' is an event label that will be triggered every time a monster of that spawn is killed. If you do not wish to define such an event, put '0' there. For a full description of how monster kill events work, see the 'monster' command.
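A hypothetical spawn line in the pipe notation above (the map name, mob id, and numbers are invented for illustration):

map1.gat,30,30,10,10|monster|Slime|1002,5,30000,120000,0

This would spawn five monsters of mob id 1002, shown on screen as "Slime", inside the 10-cell square around (30,30), with respawn delays of 30 and 120 seconds and no kill event.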
===Define a warp point===

<from map name>,<fromX>,<fromY>,<facing>|warp|<warp name>|<spanx>,<spany>,<to map name>,<toX>,<toY>

This will define a warp NPC that will warp a player between maps, and while most arguments of that are obvious, some deserve special mention.

'''SpanX''' and '''SpanY''' will make the warp sensitive to a character who didn't step directly on it, but walked into a zone which is centered on the warp "from" coordinates and spans SpanX in each direction across the X axis and SpanY in each direction across the Y axis.

'''Warp''' NPC objects also have a name, because you can use it to refer to them later with 'enablenpc'/'disablenpc'.

'''Facing''' of a warp object is irrelevant; it is not used in the code, and all current scripts have a zero in there.

===Define an NPC object:===

<map name>,<x>,<y>,<facing>|script|<NPC Name>|<sprite id>,{<code>}
<map name>,<x>,<y>,<facing>|script|<NPC Name>|<sprite id>,<triggerX>,<triggerY>,{<code>}

This will place an NPC object on a specified map at the specified location, and is the top-level command you will use the most in your custom scripting. The NPCs are triggered by clicking on them, and/or by walking in their trigger area, if defined; see below.

'''Facing''' is a direction the NPC sprite will face in. Not all NPC sprites have different images depending on the direction you look from, so for some, facing will be meaningless. Facings are counted counterclockwise in increments of 45 degrees, where 0 means facing towards the top of the map. (So to turn the sprite towards the bottom of the map, you use facing 4, and to make it look southeast it's facing 5.)

'''Sprite id''' is the sprite number used to display this particular NPC. For a full list of sprite id numbers, see http://kalen.s79.xrea.com/npc/npce.shtml . You may also use a monster's ID number instead to display a monster sprite for this NPC. It is possible to use a job sprite as well, but you must first define it as a monster sprite in 'mob_avail.txt'; a full description of how to do this is for another manual. A '-1' sprite id will make the NPC invisible (and unclickable). A '111' sprite id will make an NPC which does not have a sprite, but is still clickable, which is useful if you want to make a clickable object of the 3D terrain.

'''TriggerX''' and '''triggerY''', if given, will define an area, centered on the NPC and spanning triggerX cells in every direction across X and triggerY in every direction across Y. Walking into that area will trigger the NPC.
If no 'OnTouch:' special label is present in the NPC code, the execution will start from the beginning of the script; otherwise, it will start from the 'OnTouch:' label. The RID of the triggering character object will be attached.

'''NPC name''' is kinda special, because it's not only the name of the NPC you will see on screen. It's formatted this way:

<Screen name>{#<Extra name identifier>}{::<Label name>}

The extra identifier is there so that you can make an NPC with an invisible name (just omit the screen name, but keep the identifier name) and so that you can refer to several NPCs which have the same name on screen, which is useful to make an NPC that relocates depending on special conditions, for example: you define several NPC objects and hide all except one ('Hunter#hunter1', 'Hunter#hunter2'...). The extra name identifiers will let your code tell them apart. The label name is used to duplicate NPC objects (more on that below).

The complete NPC name (screen name + extra identifier) may not exceed 24 characters. The label name is counted separately but is also limited to 24 characters.

The code part is the script code that will execute whenever the NPC is triggered. It may contain commands and function calls, descriptions of which compose most of this document. It has to be in curly brackets; unlike elsewhere where we use curly brackets, these do NOT signify an optional parameter.

===Define an NPC duplicate:===

<map name>,<x>,<y>,<facing>|duplicate(<NPC label>)|<sprite id>
<map name>,<x>,<y>,<facing>|duplicate(<NPC label>)|<sprite id>,<triggerX>,<triggerY>

This will duplicate an NPC referred to by the label. The duplicate runs the same code as the NPC it refers to, but may have a different location, facing and sprite ID. Whether it may actually have its own size of trigger area is unclear at the moment - if you need that, try it and tell us of the results.

===Define a 'floating' NPC object===

-|script|<NPC Name>|-1,{<code>}

This will define an NPC object not triggerable by normal means. This would normally mean it's pointless, since it can't do anything, but there are exceptions, mostly related to running scripts at a specified time, which is what these floating NPC objects are for. More on that below.

===Define a shop NPC===

<map name>,<x>,<y>,<facing>|shop|<NPC Name>|<sprite id>,<itemid>:<price>{,<itemid>:<price>...}

This will define a shop NPC, which, when triggered (which can only be done by clicking), will cause a shop window to come up. No code whatsoever runs in shop NPCs, and you can't change the prices otherwise than by editing the script itself. (No variables even exist at this point of scripting, so don't even bother trying to use them.)

The item id is the number of the item in the 'item_db.txt' database. If the price is set to -1, the 'buy price' given in the item database will be used. Otherwise, the price you gave will be used for this item, which is how you create differing prices for items in different shops.

===Define a function object===

function|script|<function name>|{ <code> }

This will define a function object, callable with the [[callfunc]] command (see below). This object will load on every map server separately, so you can get at it from anywhere. It's not possible to call the code in this object by anything other than the 'callfunc' script command.

The code part is the script code that will execute whenever the function is called with 'callfunc'. It has to be in curly brackets; unlike elsewhere where we use curly brackets, these do NOT signify an optional parameter.
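Hypothetical examples of an NPC object, a shop, and a function in this notation (the map, names, sprite and item ids are all invented; mes, close, set and return are ordinary script commands described later in this manual):

map1.gat,50,50,4|script|Greeter|104,{
    mes "Welcome!";
    close;
}

map1.gat,52,50,4|shop|General Store|103,501:50,502:-1

function|script|AddOne|{
    set @x, @x + 1;
    return;
}

The 502:-1 entry would sell item 502 at the 'buy price' from the item database, as described above.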
===Alter a map cell===

<map name>|setcell|<type>,<x1>,<y1>,<x2>,<y2>

This is sneaky, and isn't used in any official scripts, but it will let you define an area (the x1/y1-x2/y2 square) of a map as having cell type 'type', where type is a number which, among other things, defines whether the area is walkable or not, whether it has Basilica working in it or not, and some other things. This is a solution just itching for a problem, and there's a number of interesting things you could use it for. Further investigation on which types are valid and what exactly they mean is pending.

==Script Body==

=What a RID is and why you need to know=

Most scripting commands and functions will want to request data about a character, store variables referenced to that character, or send stuff to the client connected to that specific character. Whenever a script is invoked by a character, it is passed a so-called '''RID''' - this is the account ID number of the character that caused the code to execute by clicking on it, walking into its OnTouch zone, or otherwise.

If you are only writing common NPCs, you don't need to bother with it. However, if you use functions, timers, or clock-based script activation, you need to be aware of all the cases when a script execution can be triggered without a RID attached. This will make a lot of commands and functions unusable, since they want data from a specific character, want to send stuff to a specific client, or want to store variables specific to that character, and they would not know what character to work on if there's no RID - unless you use [[attachrid]] to explicitly attach a character to the script first.

Whenever we say ''invoking character'', we mean ''the character whose RID is attached to the running script''.

But what about GID? GID stands for the Game ID of something; this can either be the GID obtained through mobspawn (mob control commands) or the account ID of a character. In the [http://manaplus.evolonline.org '''manaplus'''] client, you can select a player, NPC or mob (with the Q, A, or N key, for example) and press F10, then select the target tab.

=Item and pet scripts=

Each item in the item database has two special fields - EquipScript and UseScript. The first is script code run every time a character equips the item, with the RID of the equipping character. Every time they unequip an item, all temporary bonuses given by the script commands are cleared, and all the scripts are executed once again to rebuild them. This also happens in several other situations (like upon login), but the full list is currently unknown.

UseScript is a piece of script code run whenever the item is used by a character by doubleclicking on it.

Not all script commands work properly in the item scripts. Where commands and functions are known to be meant specifically for use in item scripts, they are described as such.

Every pet in the pet database has a PetScript field, which determines pet behavior. It is invoked whenever a pet of the specified type is spawned (hatched from an egg, or loaded from the char server when a character who had that pet following them connects). This may occur in some other situations as well. Don't expect anything other than commands definitely marked as usable in pet scripts to work in there reliably.

=Numbers=

Besides the common decimal numbers, which are nothing special whatsoever (though do not expect to use fractions, since ALL numbers are integer in this language), the script engine also handles hexadecimal numbers, which are otherwise identical.
Writing a number like '0x<hex digits>' will make it recognised as a hexadecimal value. Notice that 0x10 is equal to 16. Also notice that if you try to 'mes 0x10', it will print '16'. This is not used much, but it pays to know about it.

=Variables and scope=

The meat of every programming language is variables - places where you store data. Variables are divided into global (not attached to any specific RID, and independent of whoever triggered the object) and local (attached to a specific character object or a specific account object). They are further divided into permanent (they come back when the server resets) and temporary (they only persist until the server dies). This is what's called variable scope. :)

Unlike in more advanced languages, all temporary variables are essentially 'global', but not in the sense described above: if one NPC sets a temporary variable, even if it is character based, and that character triggers another NPC object, the variable will still be there. So you should be careful and set the variables you mean to be temporary to something sensible before using them. It also pays to keep variable names descriptive and reasonably long.

In the eAthena scripting language, ''variable names are not case sensitive''.

Variables are divided into and uniquely identified by the combination of:

{|
|-
|''prefix'' || determines the scope and extent (or lifetime) of the variable
|-
|''name'' || an identifier consisting of '_' and alphanumeric characters
|-
|''postfix'' || determines the '''type''' of the variable: integer or string
|}

'''Scope''' can be:

{|
|-
|''global'' || global to all servers '''(to be verified)'''
|-
|''local'' || local to the server '''(to be verified)'''
|-
|''account'' || attached to the account of the character identified by RID
|-
|''character'' || attached to the character identified by RID
|-
|''npc'' || attached to the NPC
|-
|''scope'' || attached to the scope of the instance
|}

'''Extent''' can be:

{|
|-
|''permanent'' || They still exist when the server resets.
|-
|''temporary'' || They cease to exist when the server resets.
|}

{| class="wikitable" style="width:650px;" border="1"
|+ Prefix: Variable scope
|-
| " " || That's right, nothing before a variable: this is a permanent variable attached to the character object.
|-
| "'''@'''" || A temporary version of a character-based variable. SVN versions before revision 2094 and the RC5 version will also treat 'l' as a temporary variable prefix, so beware of having variable names starting with 'l' (lowercase L): they will also be considered temporary, even if you didn't mean them to be! '''(to be verified)'''
|-
| "'''$'''" || A global permanent variable. They are stored in the "save\mapreg.txt" file and are the only kind of variables stored in a text file in the SQL version.
|-
| "'''$@'''" || A global temporary variable. This is important for scripts which are called with no RID attached, that is, not triggered by a specific character object.
|-
| "'''#'''" || A permanent account-based variable. They are stored with all the account data in "save\accreg.txt" in TXT versions and in the 'global_reg_value' table in SQL versions.
|}

There's also a "'''##'''" variable prefix, which denotes some kind of account-based variable (it gets sent to the char server for storage too), but it is not certain just what makes it different from a regular '#' variable and whether it works completely at all. There is no such thing as a temporary account-based variable.
'''(to be verified)'''

{| class="wikitable" style="width:650px;" border="1"
|-
| "##" || A permanent global account variable stored by the login server. They are stored in "save\account.txt" and in the 'global_reg_value' table (using type 1) in SQL versions. The only difference you will note from normal # variables is when you have multiple char-servers connected to the same login server: the # variables are unique to each char-server, while the ## variables are shared by all these char-servers.
|}

All of the above variables store numbers. They can store positive and negative numbers, but only whole numbers ('''so don't expect to do any fractional math'''). You can also store a string in a variable, but this means naming it specially to denote that it contains text rather than a number:

{| class="wikitable" style="width:650px;" border="1"
|+ Postfix: integer or string
|-
| nothing || integer variable, can store positive and negative whole numbers
|-
| "'''$'''" || string variable, can store text
|}

Examples:

{| class="wikitable" style="width:650px;" border="1"
|-
| name || permanent character integer variable
|-
| name$ || permanent character string variable
|-
| @name || temporary character integer variable
|-
| @name$ || temporary character string variable
|-
| $name || permanent global integer variable
|-
| $name$ || permanent global string variable
|-
| $@name || temporary global integer variable
|-
| $@name$ || temporary global string variable
|-
| .name || NPC integer variable
|-
| .name$ || NPC string variable
|-
| .@name || scope integer variable
|-
| .@name$ || scope string variable
|-
| #name || permanent local account integer variable
|-
| #name$ || permanent local account string variable
|-
| ##name || permanent global account integer variable
|-
| ##name$ || permanent global account string variable
|}

Some variables are special, that is, they are already defined for you by the scripting engine. You can see the full list somewhere in 'db/const.txt', which is a file you should read, since it also allows you to replace lots of numbered arguments for many commands with easier-to-read text. The special variables most commonly used are all permanent character-based variables:

{| class="wikitable" style="width:650px;" border="1"
|-
| StatusPoint || Amount of status points remaining.
|-
| BaseLevel || Current base level.
|-
| SkillPoint || Amount of skill points remaining.
|-
| Class || Current job.
|-
| Upper || 1 if the character is an advanced job class.
|-
| Zeny || Current amount of zeny.
|-
| Sex || Character's gender, 0 if female, 1 if male.
|-
| Weight || The weight the character currently carries.
|-
| MaxWeight || The maximum weight the character can carry.
|-
| JobLevel || Character's job level.
|-
| BaseExp || The amount of base experience points the character has. Notice that it's zero (or close) if the character just got a level.
|-
| JobExp || Same for job levels.
|-
| NextBaseExp || Amount of experience points needed to reach the next base level.
|-
| NextJobExp || Same for job levels.
|-
| Hp || Current amount of hit points.
|-
| MaxHp || Maximum amount of hit points.
|-
| Sp || Current spell points.
|-
| MaxSp || Maximum amount of spell points.
|-
| BaseJob || This is sneaky, apparently meant for baby class support. This will supposedly equal Job_Acolyte regardless of whether the character is an acolyte or a baby acolyte, for example.
|-
| Karma || The character's karma. The karma system is not fully functional, but this doesn't mean it doesn't work at all. Not tested.
|-
| Manner || The character's manner rating. Becomes negative if the player utters words forbidden through the use of the 'manner.txt' client-side file.
|}

While these behave as variables, do not always expect to just set them - it is not certain whether this will work for all of them.
Whenever there is a command or a function to set something, it's usually preferable to use that instead. The notable exception is Zeny, which you can and often will address directly: setting it will make the character own this number of Zeny. If you try to set Zeny to a negative number, the script will be terminated with an error.

• If a variable was never set, it is considered to equal zero (for number variables) or an empty string ("", nothing between the quotes) for string variables. Once you set it to that, the variable is as good as forgotten forever, and no trace remains of it, even if it was stored with character or account data. (to be verified)

# Arrays

Arrays (in eAthena, at least) are essentially a set of variables going under the same name. You can tell the specific variables of an array apart with an 'array index', the number of a variable in that array:

<variable name>[<array index>]

Variables stored in this way, inside an array, are also called 'array elements'. Arrays are specifically useful for storing a set of similar data (like several item IDs, for example) and then looping through it. You can address any array variable as if it was a normal variable:

set @arrayofnumbers[0],1;

You can also do sneaky things like using a variable (or an expression, or even a value from another array) to get at an array value:

set @x,100;
set @arrayofnumbers[@x],10;

This will make @arrayofnumbers[100] equal to 10.

Notice that index numbering always starts with 0. Arrays cannot hold more than 128 variables (so the last one can't have an index higher than 127). And array indices probably can't be negative. Nobody tested what happens when you try to get a negatively numbered variable from an array, but it's not going to be pretty. :)

Arrays can, naturally, store strings: @menulines$[0] is the 0th element of the @menulines$ array of strings. Notice the '$', normally denoting a string variable, before the square brackets that denote an array index.

=Variable type availability=

There are some important restrictions on which kinds of variables, depending on their storage location and type, will actually work:

{| class="wikitable" border="1"
|+ Variable type availability
|-
! VarType !! Norm !! Array
|-
| '''$Str$''' || OK! || OK!
|-
| '''$@Str$''' || OK! || OK!
|-
| '''@Str$''' || OK! || OK!
|-
| '''#Str$''' || FAIL! || FAIL!
|-
| '''Str$''' || FAIL! || FAIL!
|-
| '''$Int''' || OK! || OK!
|-
| '''$@Int''' || OK! || OK!
|-
| '''@Int''' || OK! || OK!
|-
| '''#Int''' || OK! || FAIL!
|-
| '''Int''' || OK! || FAIL!
|}

In short, this means two important things:

1. Account-based and character-based variables cannot form arrays, regardless of whether they are string or integer. While the script engine will allow you to define such variables, they will not actually be stored as arrays as you expect, which can lead to hard-to-debug errors.
2. Account-based and character-based variables may not store strings. Which is a real pain.

# Special variables

Only those special variables not related directly to specific script commands are listed here. For a list of the others, see 'getmapxy', 'getinventorylist', 'menu', 'select', 'warpwaitingpc'.

PC_DIE_COUNTER - this permanent character-based variable is automatically incremented every time that character dies.

jobchange_level - this permanent character-based variable is automatically set to the job level the character had before the job change, regardless of whether the job change was performed through the script command or the GM command.

CLONE_SKILL - this permanent character-based variable stores the ID of the skill that has been copied with the Plagiarism Rogue skill, if any such skill has been copied.
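A hypothetical snippet using one of these special variables (the price and the messages are invented; mes, close and goto are ordinary script commands):

    // Charge 50 zeny, but check first: setting Zeny negative terminates the script.
    if (Zeny < 50) goto L_Broke;
    set Zeny, Zeny - 50;
    mes "Thank you for your purchase!";
    close;

L_Broke:
    mes "You cannot afford this.";
    close;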
# Operators

Operators are things you can do to variables and numbers. They are either the common mathematical operations or conditional operators:

+ will add two numbers. If you try to add two strings, the result will be a string glued together at the +. You can add a number to a string, and the result will be a string. No other math operators work with strings.
- will subtract two numbers.
* will multiply two numbers.
/ will divide two numbers. Note that this is an integer division rounding down, i.e. 7/2 is not equal to 3.5, it's equal to 3 (the quotient).
% will give you the remainder of the division. 7%2 is equal to 1.

There are also conditional operators. These have to do with the conditional command 'if', and they are meant to return 1 if the condition is satisfied and 0 if it isn't. (That's what they call 'boolean' values. 0 means 'False'. Anything except zero is 'True'. Odd as it is, -1 and -5 and anything below zero will also be True.) You can compare numbers to each other and you can compare strings to each other, but you cannot compare numbers to strings.

== Is True if both sides are equal. For strings, it means they are the same.
>= True if the first value is equal to, or greater than, the second value.
<= True if the first value is equal to, or less than, the second value.
> True if the first value is greater than the second value.
< True if the first value is less than the second value.
!= True if the first value IS NOT equal to the second one.

Examples:

1==1 is True.
1<2 is True, while 1>2 is False.
@x>2 is True if @x is equal to 3. But it isn't True if @x is 2.

Only '==' and '!=' have been tested for comparing strings. Since there's no way to code a seriously complex data structure in this language, trying to sort strings by alphabet would be pointless anyway.

Comparisons can be stacked in the same condition:

&& - Is True if and only if BOTH sides are True.
|| - Is True if either side of this expression is True.

1==1 && 2==2 is True.
1==1 && 2==1 is False.
1==1 || 2==1 is True.

Binary logical operators work only on numbers:

<< Left shift.
>> Right shift.
& And.
| Or.
^ Xor.

If you don't know what these five mean, don't bother, you don't need them. Whether '!' works as a binary not operator for numbers has not been tested.

# Labels

Within executable script code, some lines can be labels:

<label name>:

Labels are points of reference in your script, which can be used to route execution with the 'goto', 'menu' and 'jump_zero' commands, invoked with the 'doevent' and 'donpcevent' commands, and are otherwise essential. A label's name may not be longer than 22 characters (the 23rd is the ':'). There is some confusion in the source about whether it's 22, 23 or 24 all over the place, so keeping labels under 22 characters could be wise.

In addition to labels you name yourself, there are also some special labels which the script engine will start execution from, in all scripts it finds them in, if a special event happens:

OnClock<hour><minute>:
OnHour<hour>:
On<weekday><hour><minute>:
OnDay<month><day>:

These will execute when the server clock hits the specified date or time. Hours and minutes are given in military time ('0105' will mean 01:05 AM). Weekdays are Sun, Mon, Tue, Wed, Thu, Fri, Sat. Months are 01 to 12; days are 01 to 31. (Remember the zero. :)
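A hypothetical floating NPC that uses one of these clock labels (the name and the announcement text are invented; whether 'announce' behaves identically in this TMW variant of the server is an assumption):

-|script|MorningHerald|-1,{
    end;

OnClock0800:
    announce "Good morning, world!", 0;
    end;
}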
OnInit:
OnInterIfInit:
OnInterIfInitOnce:

OnInit will execute every time the script loading is complete, including when the scripts are reloaded with the @reloadscript command. OnInterIfInit will execute when the map server connects to a char server; OnInterIfInitOnce will only execute once, and will not execute if the map server reconnects to the char server later.

OnAgitStart:
OnAgitEnd:
OnAgitInit:
OnAgitEliminate:
OnAgitBreak:

OnAgitStart will run whenever the server shifts into WoE mode, whether it is done with the @agitstart GM command or with the 'AgitStart' script command. OnAgitEnd will do likewise for the end of WoE. OnAgitInit will run when castle data is loaded from the char-server by the map server. (Notice that it won't run when you @reloadscript.) OnAgitBreak runs in all NPCs of a map when an Emperium is destroyed. While it is explicitly defined as the event to run when the Emperium breaks whenever an Emperium is spawned, it has some builtin code support, so it's not certain whether you can have that event named anything else. OnAgitEliminate is similar in that respect, and it runs when an Emperium is destroyed in a castle that is currently not owned by a guild.

No RID will be attached while any of the abovementioned labels are triggered, so no character or account-based variables will be accessible, until you attach a RID with 'attachrid' (see below).

OnTouch:

This label will be executed if a trigger area is defined for the NPC object it's in. If it isn't present, the execution will start from the beginning of the NPC code. The RID of the triggering character object will be attached.

OnPCDieEvent:
OnPCKillEvent:
OnPCLogoutEvent:
Interuniversity Attraction Poles Phase V 2002 - 2006
"Dynamical Systems and Control: Computation, Identification and Modelling"

Study Day of the Interuniversity Attraction Poles (IAP) V/22
Tuesday 26 November 2002
Hôtel Mercure - Avenue de Lauzelle 61, 1348 Louvain-la-Neuve

---------------------------------------------------------------

PROGRAMME

09h30  Plenary Lecture I: "Similarity in graphs. Application to web searching and to automatic synonym extraction", by Professor Vincent Blondel
10h30  A Tribute to Frank Callier
10h45  Poster Session I - Newcomers within the 7 teams of the IAP Network
12h00  Meeting of the IAP V/22 promoters and representatives of O.S.T.C.
12h30  Lunch
14h00  Poster Session II - Traditional contributed poster session
15h30  Coffee break
16h00  Plenary Lecture II: "Coordinated control of multi-agent systems", by Professor Luc Moreau
17h00  Closing

Plenary Lecture I

Similarity in graphs: Application to web searching and to automatic synonym extraction

Professor Vincent Blondel
UCL-INMA

Abstract

We introduce a new concept of similarity between nodes in graphs and describe applications of this concept to the automatic extraction of synonyms in a dictionary and to web searching. In the graph of the web, every webpage is a node and there is a directed edge from node A to node B if page A points to page B. Some of today's most efficient search engines exploit this graph structure by looking for the most "important" nodes in a sub-graph constructed from the given query. Different methods exist for identifying important nodes in a graph. One such recent method, proposed by Kleinberg, assigns a hub score and an authority score to every node. Pages with a high hub score are expected to be good "navigation pages", and those with a high authority score good "content pages". These scores are mutually reinforcing and are obtained as the result of a converging iterative process.

In our talk, we introduce a new concept of similarity between nodes in graphs. Associated to two directed graphs GA and GB, we construct a similarity matrix S whose entry (i,j) expresses how similar vertex i (in GA) is to vertex j (in GB). The potential applications of our similarity matrix for information retrieval purposes are manifold. For a particular graph GA with two nodes, the similarity matrix gives the hub and authority web-page scores of Kleinberg; and for a three-node graph, the similarity matrix is appropriate for searching for synonyms in a dictionary graph. For this last application, we report results obtained on the English dictionary "Webster" and on the French dictionary "Le Petit Robert".

The concept of the similarity matrix is joint work with Paul Van Dooren. The application of this concept to synonym extraction is joint work with Pierre Senellart.

***

Plenary Lecture II

Coordinated control of multi-agent systems

Professor Luc Moreau
RUG-SYSTeMS

Abstract

This talk focuses upon the control of groups of agents (mobile robots, underwater vehicles, etc.). Several formation control maneuvers are considered, where the agents are required to assume a prescribed position with respect to each other. The design methodology presented in this talk is of a decentralized nature. Instead of having a leader in the group, all agents are considered to be equal, thus increasing the robustness of the group with respect to failure of individuals.
Special attention goes to the issue of bandwidth constraints, which limit the communication between individual agents.

***

Poster Session I

Augmented barriers for self-scaled cones

Michel Baes
UCL-INMA

Abstract

We handle a recently studied object in interior-point methods for convex programming: the augmented barrier, which is a self-concordant barrier $F$ for a regular cone $K$ augmented by a quadratic form $\langle Qx, x\rangle$. Strangely enough, the problem of finding the analytic centre of such a function appears to be almost universal. What is more, when we deal with the positive orthant or with the psd cone, if $QK \subseteq K$, we can solve that problem with a complexity depending only on the parameter of $F$. We think it will be possible to generalise these results to the whole class of so-called self-scaled cones. We have already achieved a complete treatment of the Lorentz cone case, obtaining, as side results, an explicit characterisation of the automorphisms of the Lorentz cone and a complete complexity study of the problem of minimisation of a quadratic function on the unit ball.

*****

Dynamics and computation; Complexity and control

Jean-Charles Delvenne
UCL-INMA

Abstract

Two main lines are currently followed. First, we would like to define under which conditions a family of dynamical systems can be chosen as a model of computation, and, on the other hand, find which properties of dynamical systems are undecidable. Second, we attempt to answer the question: which systems are difficult to control? More precisely, we wish to define a measure of the complexity of control, with the help of computer science.

*****

Model reduction of mechanical systems

Damien Lemmonier
UCL-INMA

Abstract

The current objective of my research is to provide methods for constructing reduced models of large-scale linear mechanical systems that preserve the "second-order structure" of the original system. Because the systems we deal with are LTI, we could use the popular "balanced truncation" reduction method, which gives nearly optimal errors (in some sense). But this method generally does not preserve the mechanical structure. I therefore intend to develop methods that approximate balanced truncation in such a way that the reduced model is a mechanical one and the induced reduction error remains small.

Keywords: Model reduction, Balanced truncation, Hankel singular values, Projectors, Operator theory.

*****

The influence of cognitive cues on the predictive control of eye movements

Jean-Jacques Orban
UCL-INMA

Abstract

The orientation of the visual axis in space is an important function because it determines the information that is provided by the visual system. Classically, the neural control of eye movements has been investigated in response to very simple and unrealistic visual stimuli (e.g. laser spots). In this condition, it is essentially the reflexive mechanisms that are brought to light. In this thesis, we will investigate the predictive mechanisms that are used by the oculomotor system in response to more complex and realistic visual stimuli. In this condition, the eye movements are influenced by both the physical nature of the stimulus (size, color, speed) and associated cognitive cues (type of object, expected future trajectory, etc.).

*****

The role of visual feedback in eye-hand co-ordination during circular arm movements with a hand-held load in different gravitational fields

Olivier White, Philippe Lefèvre (CESAME) and J.L.
Thonnard (READ)
UCL-INMA

Abstract

In this experiment, we would like to study eye-hand co-ordination while subjects manipulate an object following a circular trajectory in novel gravity fields. The coupling between the grip force, normal to the grasped object, and the tangential force will be studied when the vectorial direction of the object's acceleration is continuously changed in relation to gravity during circular arm movements. When one moves an object in the frontal plane following a circular trajectory at a constant speed, the object is subjected to the vertical gravitational acceleration and to the centripetal acceleration. The tangential force tends to make the object slip out of the fingers. In order to restrain the object, the normal grip force has to be adequately adjusted to the tangential force fluctuations. We are particularly interested in eye movements and the role played by visual feedback when performing this task in the new gravity fields induced by parabolic flights (0G, 1G, 2G). We will record three sets of parameters. The movement dynamics parameters (ATI 3D force and torque transducer) will measure how the subject is able to anticipate the tangential force, which depends both on gravity and on the acceleration of the upper limb. The kinematic parameters, measured with an OptoTrak 3020 system, will indicate whether the subject is able to perform circular movements in altered gravitational environments with and without visual feedback. The eye movements, measured with a Skalar system, will reveal the subject's strategy for following the object's trajectory.

*****

The development of statistical models for the analysis of micro-array data

Joke Allemeersch, Bart De Moor & Yves Moreau
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

Micro-array technology allows us to analyze the activity of genes in biological samples on a massive scale. The analysis of such data requires rigorous statistical methods to deal with important problems like noise and reproducibility. The goal of this Ph.D. is to investigate the whole range of statistical techniques (experimental design, power analysis, statistical testing, variance analysis, etc.) on micro-array data. In particular, a major goal is to extend current statistical techniques to the analysis of micro-array time-course experiments. Techniques such as dynamic programming, linear system modelling, and repeated measurement techniques will be investigated in this context.

*****

The application of advanced techniques for system identification to study side channels for cryptographic algorithms: cryptanalysis and design

Evelyne Dewitte & Bart De Moor
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

To protect information against active or passive eavesdropping, one uses cryptographic algorithms that have to offer a solid level of security. This security is threatened by a new kind of attack, the so-called side-channel attacks, which use weaknesses in the implementation of the algorithms. We want to combine two previously unrelated subgroups, SISTA and COSIC, to discover the limits of these attacks by using system identification and modelling tools on a MIMO model of the cryptographic system. Once these limits are known, one can consider countermeasures that are robust against the attacks.

*****

High-throughput statistical analysis of microarray data

Steffen Durinck & Bart De Moor
K.U.Leuven, Dept.
Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

Microarrays form a powerful technique for functional genomics studies. We will develop methods and applications which will enable the analysis of thousands of microarray experiments, such as an advanced pre-processing step where a diagnosis tool is coupled with a systematic normalization procedure, methods to merge datasets of different origins, complex queries on the data, a gene-specific noise model, and transcriptome maps.

*****

Applied Nonlinear Time Series Analysis

Marcelo Espinoza & Bart De Moor
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

In the general context of the modelling and forecasting of time series, important concepts from the statistics and econometrics domains can be applied and further developed within the nonlinear modelling framework. This research will address these issues in order to develop practical models for the forecasting of electricity load and of financial time series in general, in terms of input selection, data pre-processing, structure identification and time series dynamics.

*****

Context-driven mining of literature for intelligent knowledge management with large-scale experiments in the domain of functional genomics

Frizo Janssens, Bart De Moor & Yves Moreau
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

To prolong the lifecycle of data coming from complex, expensive and large experiments in molecular biology, a knowledge management system with an intelligent data structure and querying possibilities will be developed. Context-driven text mining techniques will enable automatic retrieval and management of information coming from heterogeneous sources on the Internet.

*****

Comparative study of the composition and evolution of bacterial regulons

Pieter Monsieurs & Prof. B. De Moor
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

The composition of bacterial regulons differs significantly amongst bacterial species. During the first stage of the project, the composition of bacterial regulons will be compared amongst enteropathogenic species. During the second stage, we will try to identify the evolutionary processes that contributed to the detected variation in regulon composition. The developed methodology will be validated using two test systems, the PhoPQ and the FNR regulons.

*****

Kernel-based Model Predictive Control

Bert Pluymers, Bart De Moor & Johan Suykens
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

Model-based Predictive Control (MPC) is a control technique used, in its linear form, in wide sectors of industry. When nonlinear models are used, however, additional complications arise, the most important being that the involved optimization problems generally become non-convex, which prohibits the online implementation of the controller. This project will examine how kernel-based methods can be used for constructing models of nonlinear systems for use in MPC, how methods of convex relaxation, used in classic neural network models, can be extended to these models, and how robustness can be incorporated into these schemes.

*****

Microarray Data Analysis using Support Vector Machines and Kernel Methods

Nathalie Pochet, Bart De Moor & Johan Suykens
K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee

Abstract

The objective of this research is to find an optimal strategy for analyzing microarray data using support vector machines and kernel methods.
An important application area is oncology. Microarray data can be analyzed in three different ways: the discovery of diagnostic classes and genes with similar behaviour, the performance of clinical and biological predictions, and the discovery of relevant genes and groups of genes.

*****

Model generation and reduction for acoustic phenomena Bert Schiettecatte, Axel Nackaerts & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract Physical sound models are being used more and more to calculate synthetic sound for real-world phenomena, for example, to experience the acoustic qualities of a room or to put a sound source in virtual space. The goal of this research project is to find a translation mechanism between a 3D description of a musical instrument or environment and a computationally efficient model. We hope to rely on psycho-acoustics to reduce the complexity of both the 3D descriptions and the generated model.

*****

Development of a strategy based on phylogenetic footprinting for the identification of regulatory elements in eukaryotic promoters Ruth Van Hellemont, Bart De Moor & Y. Van de Peer K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract The goal of this research is the development of a generic procedure for phylogenetic footprinting for the identification of regulatory motifs in eukaryotic sequences. This consists of two sub-goals: first, a methodology will be developed for the generation of a data set appropriate for phylogenetic footprinting. In a second phase, the methodology for phylogenetic footprinting itself will be optimised. The developed methodology will be validated on a test set of Hox genes.

*****

Design and evaluation of DSP algorithms for feedback cancellation in Public Address systems (Ontwerp en evaluatie van digitale signaalverwerkingsalgoritmen voor feedbackonderdrukking in 'Public Address'-systemen) Toon van Waterschoot & Marc Moonen K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract In a Public Address (P.A.) system the loudspeaker sound is often fed back into the microphone. This may result in system instability that is perceived as "howling". State-of-the-art solutions use notch filters in the forward path between microphone and loudspeaker. Our research will focus on another approach, inspired by a similar technique in hearing aids. This approach aims at feedback cancellation by estimating the feedback signal using an adaptive filter.

*****

Internal Normalisation in LA-ICP-MS & Non-Linear Growth Rates in Biota Fjo De Ridder(1), Anouk Verheyden(2), Phillippe Willenz(3), Frank Dehairs(4), Johan Schoukens(1), Rik Pintelon(1) (1) Department of Electricity and Instrumentation, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium, [email protected] (2) Laboratory of General Botany and Nature Management, Vrije Universiteit Brussel (3) Department of Invertebrates, Royal Belgian Institute of Natural Sciences, Rue Vautier 29, 1000 Brussels, Belgium (4) Analytical and Environmental Chemistry Department, Vrije Universiteit Brussel Abstract 1. Due to fluctuations in the laser intensity and differences in the density of the sampled material, the signal of an LA-ICP-MS instrument is not proportional to the concentration of a specific element. This effect is known as drift and is usually corrected by referring the analyte signal to that of an internal standard. The latter is an element homogeneously distributed throughout the whole sample.
The drift pattern will then be reflected in the measured pattern for this element, and the signals of all other elements can be compensated with this estimated drift pattern. When this strategy was employed on measurements of a sclerosponge (a calcium carbonate secreting marine sponge), it was found that different internal standards (Ca, Sr, U, …) led to different drift patterns. We developed a weighted least squares estimation of the drift pattern, based on the measurements of multiple internal standards. The weighting emphasises the more precise measurements and thus estimates the most probable drift pattern from all internal standards at once. Using multiple internal standards also provided an internal quality check. 2. For specific biota, the record of a feature (i.e. a proxy) along a growth axis can reflect (changing) environmental conditions experienced during the lifetime of the organism. For example, the density of vessels in trees is related to ambient environmental conditions. For a mangrove tree, we attempted to partly reconstruct these environmental conditions by analyzing vessel density along the growth axis. Assuming a linear growth rate (in this case 1.3 mm/year) led to a reasonable error in the matching of environmental conditions with vessel density. The second part of this poster describes an estimation of this non-linear growth rate based on methods used to characterize time base distortions in high-frequency sampling scopes.

*****

Powell-Sabin splines Jan Maes KUL-NUMERICS Abstract Piecewise polynomials on triangulations, and Powell-Sabin (PS) splines in particular, form an attractive alternative to the widely used tensor product splines. In this poster we give an overview of the research on Powell-Sabin spline surfaces. We present a normalised B-spline representation for PS splines. With this representation we can form a set of control triangles which have a nice geometric interpretation. A subdivision scheme for PS splines is introduced on both uniform and general triangulations. New wavelets have been generated by using these subdivision schemes as the prediction step in the lifting scheme. We also show some algorithms for data fitting and surface interrogation. Finally, we give some suggestions for further research.

*****

DDE-BIFTOOL, a software package for the bifurcation analysis of Delay Differential Equations G. Samaey, D. Roose, K. Engelborghs, T. Luzyanina KUL-NUMERICS Abstract DDE-BIFTOOL is a Matlab-based software package for numerical bifurcation analysis of delay differential equations with fixed and/or state-dependent delays. The package contains procedures for stability analysis of steady state solutions of DDEs, computation of periodic solutions and their stability (using a collocation approach), and computation of homoclinic and heteroclinic orbits. We illustrate the capabilities of DDE-BIFTOOL for the analysis of mathematical models from various application areas.

*****

Macroscopic analysis of microscopic evolution laws Pieter Van Leemput KUL-NUMERICS Abstract When looking for models that capture the complexity of dynamical systems, the microscopic Lattice Boltzmann model can be considered a viable alternative to the macroscopic description of the system by means of a set of partial differential equations. A macroscopic time integrator that uses the microscopic Lattice Boltzmann evolution law instead of a macroscopic one is constructed.
Our ultimate goal is the derivation of the important macroscopic properties of the dynamical system using this time integrator. For example, the stability of the system can be analyzed accurately by computing the largest eigenvalues of the spectrum of the macroscopic time integrator. Also, the macroscopic time integrator can be coupled to the Newton-Picard method to allow for the continuation of solution branches and a bifurcation analysis of the system. The one-dimensional FitzHugh-Nagumo reaction-diffusion system is considered as an example. Other microscopic systems, like cellular automata, will be considered as well.

*****

Numerical bifurcation analysis of large-scale delay differential equations Koen Verheyden (1) Scientific Computing Research Group, Department of Computer Science, K.U.Leuven Abstract This Ph.D. thesis is about the bifurcation analysis of large-scale delay differential equations (DDEs); more precisely, the design and implementation of efficient and robust iterative methods. Delay differential equations are used more and more as a modelling tool in, e.g., control theory and population dynamics. The delay terms take "memory phenomena" such as latency into account. Fixed point delay models as well as continuous delay models are being used; the numerical treatment of both cases will be considered in this thesis. For the continuous delay case – where the delay is given in the form of an integral term – adapted discretization techniques will be studied. Another key issue is the use of appropriate iterative numerical linear algebra techniques. We have already implemented an iterative method for the computation of periodic solutions of DDEs. The initial value problem for a DDE requires the function to be given on an entire interval. Thus the computation of periodic solutions always results in large nonlinear systems, because the periodicity condition has to be discretized on an initial and a final interval. We use single shooting on the linearization of the DDE about a proposed trajectory. The time integrator we use is the implicit Runge-Kutta scheme corresponding to the Gauss-Legendre collocation method. One linear single shooting problem – corresponding to a Newton step – is solved by iterative refinement. The resulting linear systems are approximately solved by the Newton-Picard method. This iterative method exploits the fact that there are only a few semi-stable or unstable Floquet multipliers. Our implementation is done in DDE-BIFTOOL, a Matlab package for the numerical bifurcation analysis of DDEs developed at the Scientific Computing Research Group. (1) Research Assistant of the Fund for Scientific Research - Flanders (Belgium).

*****

Experimental Study and Modelling of the Neural Control of Juggling Renaud Ronsse Montefiore Department - Université de Liège (ULg) Abstract This research aims at a better understanding of the key control parameters that enable a human to perform simple juggling tasks. Concurrent research work is studying the dynamics and mathematical control of this task (see M. Gérard, ULg). This project will confront theoretical predictions, based on conclusions from mathematical results, with an experimental study of human juggling. Juggling is here understood in the broad sense of a task that is inherently unstable, thus requiring some amount of feedback, and that involves rhythmic coordination. For a human juggler, feedback can be provided by visual or tactile information.
*****

Modeling the competitive growth of Listeria innocua and Lactococcus lactis M. Antwi, K.M. Vereecken and J.F.M. Van Impe BioTeC-Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Belgium Abstract Knowledge of antagonistic interaction phenomena in mixed microbial cultures is essential for microbial safety and shelf life estimation of food products. In this work, the growth of Listeria innocua and Lactococcus lactis in a modified Brain Heart Infusion medium was studied in pure and mixed cultures. An existing single-species model is used to describe the experimental data. The model provides a reasonable description of the mixed population growth studied, but its validity region may be limited. In further research, a novel model that includes the factors affecting the micro-organisms' metabolism will be applied.

*****

Modeling the onset of filamentous bulking based on image analysis information in wastewater treatment systems E.N. Banadda, R. Jenne, I. Smets and J.F. Van Impe BioTeC-Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Belgium Abstract Filamentous bulking is a widespread problem in the operation of activated sludge processes. A fully automated image analysis method for recognizing and characterizing flocs and filaments in activated sludge images has been developed. It is the aim of this work to seek correlations between image analysis information and classical measurements, and to investigate whether image analysis information can be used to predict the onset of filamentous bulking with the aid of black-box models.

*****

Application of a novel procedure to quantify thermal inactivation kinetics (Part II) Valdramidis V.P., Bernaerts K., Geeraerd A.H. and Van Impe J.F.* BioTeC-Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Belgium Abstract The concept of predictive microbiology is that a detailed knowledge of the responses of microorganisms to environmental conditions enables objective evaluation of the effects of processing, distribution and storage operations on the microbiological safety and quality of food. Kinetic parameters and models are the main tools provided for the implementation of predictive microbiology for the different preservation processes. Among these processes, thermal treatment is a preservation process that has been practiced for more than five thousand years and is undoubtedly the method most widely used in the food industry to inactivate microorganisms. Inactivation kinetics are of high interest for determining the extent of microbial inactivation. This paper deals with the estimation of thermal microbial inactivation kinetics using the methodology of optimal experiment design and the advanced processing technique presented in Bernaerts and Van Impe (2002) [Food Micro 2002]. Time and temperature parameters are estimated, namely the decimal reduction time (D-value) and the thermal resistance constant (z-value). Integrated into the Bigelow model, they can yield predictions for the specific inactivation rate at certain temperatures. E. coli K12, grown in Brain Heart Infusion, is chosen as a surrogate for the food-borne pathogen E. coli O157:H7. Survival data of the microorganism are collected and processed according to the optimal experiment design. The population density data are described accurately by a dynamic inactivation model combined with the Bigelow model through nonlinear regression.
The parameter estimation accuracy on D and z is assessed by the construction of joint confidence regions. The magnitude of the estimated parameters is in a range consistent with previously published data. Furthermore, the parameters were estimated with much less experimental effort, while in the literature a higher number of temperature series is applied. Finally, the joint confidence region of the best-fit parameter estimates when fitting the processed data is highly satisfactory. Acknowledgements: This research is supported by the Research Council of the Katholieke Universiteit Leuven as part of projects OT/99/24 and IDO/00/008, the Institute for the Promotion of Innovation by Science and Technology (IWT), the Fund for Scientific Research – Flanders (FWO) as part of project G.0213.02, the Belgian Program on Interuniversity Poles of Attraction and the Second Multiannual Scientific Support Plan for a Sustainable Development Policy, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, and the European Commission as part of project EU QLK1-CT-2001-01415. The scientific responsibility is assumed by its authors.

Poster Session II

LQ-Optimal Temperature Regulation for Nonisothermal Plug Flow Reactor I. Aksikas, J.J. Winkin, D. Dochain UCL-INMA Abstract The Linear-Quadratic optimal temperature regulation problem is studied for a nonisothermal plug flow tubular reactor model. The problem is solved for the linearized model around a constant temperature profile along the reactor. Then the resulting state feedback is applied to the nonlinear model, and the corresponding closed-loop system performance is analyzed.

*****

The Time Course of Compensation for Anticipatory Smooth Eye Movements in a Target Localization Task G. Blohm, M. Crommelinck, M. Missal and P. Lefèvre CESAME and Lab. Neurophysiol., Université catholique de Louvain, Belgium Abstract A target briefly flashed during smooth pursuit eye movements evokes localization saccades that do not compensate for the ongoing smooth eye movement (McKenzie and Lisberger 1986). In this study, we use a flash localization task to investigate the compensation mechanism of anticipatory smooth eye movements. After a fixation (800 ms) and a gap period (300 ms), 7 human subjects were required to pursue a moving target (40°/s). Repetitive presentation of this stimulus led to robust anticipatory smooth eye movements (10°/s). In 30% of the trials, instead of the ramp, a peripheral 10 ms flash appeared ±15° around the current eye position. Subjects were instructed to make saccades to the remembered location of the flash in darkness. We found that, on average, the smooth eye movement lasted until 540 ms after the flash. The capture of the target typically required 2 to 4 saccades because the smooth eye displacement (SED) perturbed the orientation to the flash. The compensation of this perturbation was a dynamical process that, on average, started only ~300 ms after the flash. At the end of the orientation process (on average 849 ms after the flash), the oculomotor system had compensated for 70% of the SED (43%-92% across subjects). We conclude that the compensation of the SED must be based on an efference copy of the smooth anticipatory motor command that updates the spatial localization of the flashed target. In addition, there is a 300 ms delay in this compensation that includes the time necessary to program and execute saccades but also to integrate the anticipatory motion. Supported by FNRS, SSTC and FSR (Belgium).
*****

Observability analysis of a nonlinear tubular bioreactor Cédric Delattre UCL-INMA Abstract In this poster an observability analysis is performed for an axial dispersion tubular bioreactor. It involves one growth reaction that follows the nonlinear Monod law. As a first step, it is assumed that the biomass is constant. In this case, the process can be described by a semilinear parabolic Partial Differential Equation (PDE). More precisely, the analysis is performed on a tangent linearized model that is described by a linear PDE with a spatially-dependent coefficient. It is reported that the associated linear infinite-dimensional operator is a Sturm-Liouville operator and a Riesz-spectral operator, and that it generates a $C_{0}$-semigroup. Then it is shown that a finite number of dominant modes of the system are observable when the substrate concentration is measured at the reactor output by an appropriate sensor. Open problem: suppose now that the biomass varies. How can the observability of this new model be analyzed? Will both studies give consistent results?

*****

Switched Continuous (Hybrid) Model of a High Pressure Food Thawing UCL-INMA Abstract The refrigeration of foodstuffs is an important step in the food industry. Recent studies have established that the kinetics of freezing and thawing processes are crucial to the final quality of the food [Chourot, 1997][Chevalier et al., 2000]. Therefore, it is important to master these kinetics. High pressure makes the control of such a process easier by modifying the thermo-physical properties of water, which is why high-pressure food thawing is considered here. Such a process involves complex modelling: it is nonlinear with distributed parameters. The non-linearity is due firstly to phase changes (solid, melting, liquid) and secondly to pressure steps. This paper shows that a hybrid methodology makes it possible to linearize the model. Some laboratory experiments are carried out to validate the model.

*****

Yvan Hachez UCL-INMA Abstract Two important classes of quadratic eigenvalue problems are the elliptic and the hyperbolic problems. In N. J. Higham, F. Tisseur, and P. M. Van Dooren, Linear Algebra Appl., 351-352 (2002), pp. 455-474, the distance to the nearest non-hyperbolic or non-elliptic quadratic eigenvalue problem is obtained using a global minimization problem. This paper proposes explicit formulas to compute these distances and the optimal perturbations. The problem of computing the nearest elliptic or hyperbolic quadratic eigenvalue problem is also solved. Numerical results are given to illustrate the theory.

*****

Calculation of elementary flux modes in reaction networks: illustration with CHO cell metabolism Agnès Provost UCL-INMA Abstract When considering a complex reaction network, an interesting question is to characterize the simplest reactions that can connect input and output species. In this poster, we present a method based on elementary flux modes (EFMs), which are obtained by computing the convex basis of the kernel of the stoichiometric matrix. An algorithm is presented for the calculation of EFMs. The successive steps of the method are illustrated on the network describing CHO cells' central metabolism.

*****

C. Schreiber, G. Blohm, M. Missal & P. Lefèvre CESAME and Lab. Neurophysiol., UCL, Brussels, Belgium Abstract During visual tracking of a moving stimulus, primates orient their visual axis by combining smooth pursuit and catch-up saccades.
A quantitative analysis of catch-up saccades has been done for horizontal movements (de Brouwer et al., 2002). In this study, we investigate the properties of catch-up saccades made to visual stimuli moving in two dimensions (2D). We measured 2D eye movements in 4 human subjects (search coil technique). Each trial started with a 2D Rashbass step-ramp stimulus (velocity [10..20 deg/s], direction [0..360 deg], duration [600..1100 ms]). This was followed by a second step-ramp of the target [500..700 ms]. Both the direction of the position step (PS, [-10..10 deg]) and the velocity step (VS, [-40..40 deg/s]) varied randomly in 2D. We analysed the first catch-up saccades after the second step of the target. We found that catch-up saccades to the second ramp were characterized by latencies as short as 100 ms with respect to the second target step (mean 185 ms). The average gain of catch-up saccades was 0.84 (n=1175). However, when the data were restricted to moderate values of retinal slip (RS<20 deg/s), the average gain was 0.9 (n=569). A multiple linear regression analysis was performed to find the parameters determining the amplitude of catch-up saccades. For both horizontal and vertical components of saccades, we found the best correlation with position error (PE) and RS as independent variables: Amp=0.9*PE+0.11*RS (R>0.97, n=569). We conclude that both position error and retinal slip are taken into account when programming 2D catch-up saccades. Supported by FNRS, SSTC and FSR (Belgium).

*****

Optimal pre-filtering in Iterative Feedback Tuning Gabriel Solari UCL-INMA Abstract IFT is a method for tuning a controller by performing special experiments on the plant, interleaved with periods in which the operating conditions are normal. We present a method to improve the convergence of IFT by pre-filtering the input data used in the special experiment. The optimal pre-filter is computed at every iteration from data collected under normal operating conditions of the plant.

*****

TOUCAN: Deciphering the Cis-Regulatory Logic of Coregulated Genes Stein Aerts, Gert Thijs, Bert Coessens, Mik Staes, Yves Moreau & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract Toucan is a Java application for the rapid discovery of significant cis-regulatory elements from sets of coexpressed or coregulated genes. Biologists can automatically (1) retrieve genes and intergenic regions, (2) identify putative regulatory regions, (3) score sequences for known transcription factor binding sites, (4) identify candidate motifs for unknown binding sites, and (5) detect those statistically over-represented sites that are characteristic for a gene set.

*****

Integration of Datamining techniques in a CPI environment (BASF) Steven Bex & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract ESAT-SCD has an extensive collection of datamining algorithms, pre-processing techniques, and other modelling and analysis tools to go from large data sets to usable knowledge. However, most of these are either for very specific applications or are centered in the bio-informatics domain. BASF, on the other hand, has experience with datamining for specific problems, but does not use the newest algorithms and cannot assess which algorithm is best used where. Hence this project, in which a feasibility study and integration of SCD techniques in a BASF environment will be carried out.

*****

Building a Semantic BioScape Bert Coessens, Janick Mathijs and Bart De Moor K.U.Leuven, Dept.
Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract Biology is a knowledge-based rather than an axiom-based science. As a consequence, new knowledge has to be inferred from and validated by existing knowledge. But the huge amount of existing knowledge, its heterogeneous nature and the fact that it is distributed all over the internet make this a horrendous task. Biology copes with a problem of diversity and distribution of the information it works with. We propose an architecture that integrates different tools and algorithms for analysis and mining of microarray data and gives them integrated access to heterogeneous information sources. The general concept is not to integrate all the different tools, algorithms and biological information sources in the architecture, but rather to construct an integrated view of the information they manage, output and contain, in the form of application-based ontologies. The SOAP Web Service technology and DAML-S are used to make different distributed algorithms searchable and interoperable, and to hide their specific implementations from the architecture. With this architecture we want to make analysis of microarray data faster and more efficient.

*****

The canonical decomposition: Uniqueness, Algorithm, Applications Lieven De Lathauwer, Bart De Moor and Joos Vandewalle K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract The Canonical Decomposition (CD) of a higher-order tensor is the decomposition of that tensor into a minimal sum of rank-1 terms. Unlike the situation for matrices, this decomposition can be unique without orthogonality constraints on the components, and the minimal number of terms (the "tensor rank") can be much bigger than the dimensions of the tensor. In this poster we present a new uniqueness theorem for the CD, which applies to a wide range of tensors for which uniqueness had not yet been proven. The theorem is constructive; it shows that the components may be obtained from a simultaneous congruence transformation. Applications can be found in factor analysis, harmonic retrieval, underdetermined independent component analysis, blind identification, state-of-the-art DS-CDMA techniques, etc.

*****

Using Literature and Data to Annotate and Learn Bayesian Networks Geert Fannes, Patrick Glenisson & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract The increasing availability of electronic literature opens up the possibility of using it as prior knowledge when dealing with complex statistical models where data is scarce or high levels of noise are present. This raises the question of how to actually perform the integration of domain literature with statistical data. In this paper, we assume that our textual information consists of short free-text descriptions of the domain variables and, optionally, of a large repository of related domain literature, while the statistical data are classical vectors of observations. Because of their explicit semantics and powerful computational methods, Bayesian networks are suitable candidates for the integration of prior domain knowledge and statistical data. To connect our computational model with textual domain knowledge, we define an extended representation of Bayesian networks called the Annotated Bayesian Network. We introduce a text-based prior for the evaluation of Bayesian network substructures, derive from it a prior distribution over the space of Bayesian network structures, and update this to a posterior with data.
We evaluate our methodology in the ovarian cancer domain by first indicating the correlation between the text-based and data-based scores for substructures. Next, we compare the performance of our newly introduced text-based prior for Bayesian network structures in classifying ovarian tumors versus the performance of a Bayesian network with a uniform structure prior. We show that when only a small number of samples is available (a common situation in medical and biological applications), the prior derived from the literature increases the classification performance.

*****

Soccer Mining: Analysis and Design Jelle Geerits, Emil Muresan & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract This poster presents the ESAT-SCD soccer analysis and data mining project. Starting from four videos of a soccer game, taped with static cameras from four different corners, the 2D coordinates of the players on the field and the 3D coordinates of the ball are retrieved and stored in a database. From this database, statistics and analyses can be performed. Trivial statistics, like average speed, acceleration, total run distance and field coverage of each player, are visualised. More complex analyses are also explored, like detection of game strategy, prediction of best actions in a given situation, offside detection, referee positioning with respect to the game, modelling power consumption, and correlation diagrams with physiological measurements such as heart beats. In addition, we work on automated label generation (close-to-offside, danger-for-counter-attack, best-pass-opportunity, etc.) to be integrated in VideoCoach, another tool that we have developed. This project turns out to be truly multi-disciplinary: player/ball tracking via Kalman and/or particle filtering, computational geometry (player polygons), ball track fitting (orthogonal distance regression, curvature and torsion (Frenet-Serret) coordinates), optimal control theory (MPC for optimal defense determination) and game theory (best attack under uncertainty). Of course, we'll show some game sequences.

*****

Subspace regression in reproducing kernel Hilbert space L. Hoegaerts, J.A.K. Suykens, J. Vandewalle & B. De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract We focus on three methods for finding a suitable subspace for regression in a reproducing kernel Hilbert space: kernel principal component analysis, kernel partial least squares, and kernel canonical correlation analysis. We demonstrate how these fit within a more general context of subspace regression. For the kernel partial least squares case, a least squares support vector machine style derivation is given with a primal-dual optimization problem formulation. The methods are illustrated and compared on a number of examples.

*****

A Case Study on Traffic Flow Modelling, Simulation and Control Sven Maerivoet, Tom Bellemans and Bart De Moor Department of Electrical Engineering, ESAT-SCD (SISTA), Katholieke Universiteit Leuven Abstract In our case study, we observe the traffic data (as collected by the Traffic Centre in Wilrijk) of the E17 highway in the direction of Antwerp. Interpretation of this time series leads to the well-known fundamental diagrams from traffic flow theory (these diagrams exhibit the metastability and hysteresis phenomena). As a macroscopic traffic flow model, we use Papageorgiou's METANET model.
The microscopic model is in one case based on a continuous implementation and in another case on a discrete version implemented as a traffic cellular automaton. As a control measure, we investigate the use of ramp metering with model predictive control, which outperforms the default ALINEA algorithm.

*****

Robust cross-validation score functions for kernel based function estimation Kristiaan Pelckmans, Jos De Brabanter, Johan Suykens & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract This research focuses on methods for tuning the hyperparameters for function estimation in the case of non-Gaussian noise or outliers on the output data. Although classical techniques such as leave-one-out and generalized cross-validation are widely used, they are not able to handle certain classes of contaminated noise models effectively. For this purpose, robustified versions of L-fold cross-validation and generalized cross-validation are introduced. The problem of optimizing the robust score function is considered. As empirical evaluations of the score function on a number of test examples illustrate, the resulting score function may suffer from several local minima, mainly caused by variance in the score estimator. A suitable optimization procedure, based on a stochastic approximation method, is then developed, leading to the global minimum of the score function. The ideas are illustrated for weighted LS-SVMs with an RBF kernel. The robust cross-validation methods are applied to several toy and real-life data sets, with improved test set performance measured in different norms.

*****

A family of entanglement monotones for mixed 2-qubit quantum systems Maarten Van den Nest & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract We present an infinite class of entanglement monotones for general 2-qubit quantum systems, using the Lorentz singular value decomposition of a 2-qubit density matrix. These entanglement monotones are linear functions of the Lorentz singular values and comprise a generalization of the concurrence, a celebrated entanglement measure for 2-qubit systems. Our family of monotones provides necessary conditions for one state to be convertible to another using local quantum operations and classical communication.

*****

Intelligent Information Infrastructure supporting Knowledge Management in a Research and Development Environment Dries Van Dromme & Bart De Moor K.U.Leuven, Dept. Elektrotechniek, ESAT-SCD (SISTA), Heverlee Abstract In the project "McKnow", we set up an information system that considers both user profiles and documents (of whatever type, together with their metadata) as resources. In our research, it is our intention to also capture relevance feedback, information from interactions of users with documents, and so-called "tacit knowledge" from experts. All these data feed the Data Warehouse, which, coupled with good Data Mining techniques, will yield an automated, user-oriented, dynamic system for knowledge management in a chaotic environment. In particular, using clustering algorithms, we have been grouping the documents from ESAT-SISTA publications to see if the clusters reflect the Research Group's divisions in both scientific topics and composition by its members (= users = researchers). To be able to do this, we had to develop an automated text extraction and indexing method. Considerable effort has gone into these Text Mining aspects, and we will briefly discuss them.
Next, the results of the clustering efforts will be discussed and shown.

*****

Identification of Young's Modulus from Broadband Modal Analysis Experiments R. Pintelon(1), P. Guillaume(2), K. De Belder(1) and Y. Rolain(1) (1) Vrije Universiteit Brussel, dept. ELEC, Pleinlaan 2, 1050 Brussels, BELGIUM (2) Vrije Universiteit Brussel, dept. WERK, Pleinlaan 2, 1050 Brussels, BELGIUM Abstract The stress-strain relationship of linear viscoelastic materials is characterized by a complex-valued, frequency-dependent elastic modulus (Young's modulus). Using system identification techniques, it is shown in this paper how it can be measured accurately in a broad frequency band from forced flexural (transverse) and longitudinal vibration experiments on a beam under free-free boundary conditions. The advantages of the proposed method are (i) that it takes into account the disturbing noise and the nonlinear distortions, (ii) that the estimate is delivered with an uncertainty bound, (iii) its low sensitivity to non-idealities of the experimental set-up, and (iv) its ability to measure lightly damped materials. The approach is illustrated on several practical examples: brass, copper, plexiglass and PVC.

*****

New results concerning the new modified Newton iteration for Toeplitz matrices Gianni Codevico KUL-NUMERICS Abstract The classical Newton iteration method for matrices can be modified into an efficient algorithm when structured matrices are involved. The difficulty, however, lies in the choice of the starting matrix and of the compressed form of the inverse approximation. With this poster, we want to show our recent progress on the new modified Newton iteration. The validity of the approach is illustrated by numerical experiments based on our C++ library for Toeplitz-like systems.

*****

Divide and conquer algorithms for computing the eigendecomposition of diagonal-plus-semiseparable matrices Ellen Van Camp KUL-NUMERICS Abstract First we construct a new algorithm with quadratic computational complexity to tridiagonalise a symmetric diagonal-plus-semiseparable matrix. From this algorithm we derive two divide and conquer algorithms (a one-way and a two-way algorithm) for calculating the eigendecomposition of a symmetric diagonal-plus-semiseparable matrix. Keywords: diagonal-plus-semiseparable matrices, eigendecomposition, divide and conquer algorithms, fast and stable algorithms.

*****

An algorithm for computing the singular values based on semiseparable matrices Raf Vandebril, Marc Van Barel & Nicola Mastronardi KUL-NUMERICS Abstract This poster presents a new algorithm for finding the singular values of an m-by-n matrix A. The traditional algorithm, based on bidiagonal matrices and the implicit application of the QR algorithm, is explained. The new algorithm is based on semiseparable matrices instead of bidiagonal ones. In every step a comparison is made between the traditional and the new approach. The traditional reduction to an upper bidiagonal matrix is replaced with a reduction to an upper semiseparable matrix, and the implicit QR algorithm for bidiagonal matrices is translated to the semiseparable case. Numerical experiments are shown, comparing the accuracy and the timings of the traditional algorithm with the new algorithm.

*****

Stabilization of Planar Juggling Patterns Manuel Gérard ULg-SYST Abstract This poster is devoted to the stabilization of planar juggling trajectories. A juggling pattern is defined by the periodic motion of a mass-point bouncing between the edges of a rectangular billiard.
The model assumes Newtonian free-flight dynamics and elastic collisions. The angular position and angular momentum of one of the edges are actuated. The stabilization problem is analyzed by studying the discrete return map of the ball on this edge. When the system is uncontrolled, two physical quantities are shown to be conserved at the instants of impact: the energy of the ball and its velocity component orthogonal to the surface. Based on these invariant quantities, discrete control laws are designed to stabilize a specified juggling figure. It is argued that this control methodology achieves large basins of attraction, simplifies the closed-loop analysis and applies to a wide range of juggling configurations.

*****

Image analysis as a monitoring tool for activated sludge properties R. Jenné, E.N. Banadda and J.F. Van Impe BioTeC - Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Leuven, Belgium Abstract A filamentous bulking problem occurs when filamentous bacteria overgrow (well-settling) floc-forming bacteria. In this work, a fully automatic image analysis procedure was developed to monitor the activated sludge properties in a lab-scale installation. In this way, the potential of image analysis as an early warning system for filamentous bulking was evaluated.

*****

Towards a new generation of simple models for microbial growth Poschet F., Vereecken K.M., Geeraerd A.H. and Van Impe J.F.* BioTeC - Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Leuven, Belgium Abstract Food safety and quality are largely determined by the proliferation of pathogenic and spoilage micro-organisms during the life cycle of the product (i.e., from the start of the production process until consumption). In order to simulate and predict the microbial evolution in foods, mathematical models are developed in the field of predictive microbiology. Microbial growth normally passes through three main phases: first a lag phase during which the cells adapt to their new environment, followed by an exponential growth phase during which the cells multiply exponentially, and finally a stationary phase during which a maximum population density is reached. The growth model of Baranyi and Roberts, which is considered to be the most widespread growth model, describes these three phases of microbial growth by means of the following two differential equations:

$$\frac{dN}{dt} = \frac{Q}{1+Q}\,\mu_{\max}\left(1-\frac{N}{N_{\max}}\right)N, \qquad \frac{dQ}{dt} = \mu_{\max}\,Q$$

where N [CFU/mL] is the microbial cell concentration, $\mu_{\max}$ [1/h] the maximum specific growth rate, and Q [-] characterises the physiological state of the microbial cells. The factor Q/(1+Q) is called the adjustment function and describes the lag phase by means of the physiological state of the cells. The factor $\mu_{\max}N$ describes the exponential growth phase. The $(1-N/N_{\max})$ factor, the inhibition function, describes the transition to the stationary phase. A major disadvantage of the Baranyi and Roberts growth model is the lack of mechanistic knowledge in the inhibition function and its poor extendibility to interactions between different microbial species, microbial growth in structured media, etc. In our new modelling concept, the inhibition function of the Baranyi and Roberts growth model is replaced by a more mechanistically based function. The stationary phase is assumed to be a result of an increasing inhibition by a toxic metabolite produced by the cells. After a careful structural analysis, the new model is applied to a large set of experimental data of Escherichia coli K12.
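[As an illustration of the two differential equations above, here is a minimal simulation sketch in R using the deSolve package. This is an editorial addition, not part of the poster; the parameter values are invented for the example.]

# Baranyi-Roberts growth model: minimal sketch (illustrative parameters only)
library(deSolve)

baranyi <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    adjustment <- Q / (1 + Q)          # lag-phase adjustment function
    inhibition <- 1 - N / N_max        # transition to the stationary phase
    dN <- adjustment * mu_max * inhibition * N
    dQ <- mu_max * Q                   # physiological state of the cells
    list(c(dN, dQ))
  })
}

parms <- c(mu_max = 2.0, N_max = 1e9)  # [1/h] and [CFU/mL], hypothetical values
state <- c(N = 1e3, Q = 0.1)           # initial concentration and physiological state
out <- ode(y = state, times = seq(0, 12, by = 0.1), func = baranyi, parms = parms)
plot(out)                              # N(t) shows lag, exponential and stationary phases

The simulated N(t) curve reproduces the three phases described in the abstract: a lag while Q/(1+Q) is small, an exponential rise, and a plateau at N_max.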
Both aspects are compared with the growth model of Baranyi and Roberts. The new model has a more mechanistically founded basis than the model of Baranyi and Roberts and is consequently easier to extend to more realistic (and more complex) situations (e.g., microbial interactions and growth in a structured environment), while maintaining a similar predictive value. Acknowledgements: This research is supported by the Research Council of the Katholieke Universiteit Leuven as part of projects OT/99/24 and IDO/00/008, the Institute for the Promotion of Innovation by Science and Technology (IWT), the Fund for Scientific Research–Flanders (FWO) as part of project G.0213.02, the Belgian Program on Interuniversity Poles of Attraction and the Second Multi-annual Scientific Support Plan for a Sustainable Development Policy, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture, and the European Commission as part of project QLK1-CT-2001-01415. The scientific responsibility is assumed by its authors.

*****

Analysis and evaluation of a serial dilution experimental protocol by means of a simulation model A.R. Standaert, A.H. Geeraerd, K. Bernaerts, K. Francois, F. Devlieghere, J. Debevere and J.F. Van Impe BioTeC - Bioprocess Technology and Control, Katholieke Universiteit Leuven Kasteelpark Arenberg 22, B-3001 Leuven, Belgium Abstract Serial dilution experiments are a standard way of obtaining single cells for the study of, in this case, individual cell lag times. A simulation model of the serial dilution experimental protocol is constructed, providing further insight into the experimental process, and is used to suggest improvements to the protocol.
# Moduli spaces of vector bundles over a Klein surface

Speaker: Florent Schaffhauser
Affiliation: IHES/MPI
Date: Thu, 2010-02-04 15:00 - 16:00
Location: MPIM Lecture Hall
Parent event: MPI-Oberseminar

A compact topological surface S, possibly non-orientable and with non-empty boundary, always admits a Klein surface structure (an atlas whose transition maps are dianalytic). Its complex cover is, by definition, a compact Riemann surface X endowed with an antiholomorphic involution which determines topologically the original surface S. In this talk, we relate dianalytic vector bundles over S and holomorphic vector bundles over X, devoting special attention to the implications this has for moduli spaces of semistable bundles over X. We construct, starting from S, Lagrangian submanifolds of moduli spaces of semistable bundles of fixed rank and degree over X. This relates the present work to constructions of Ho and Liu over non-orientable compact surfaces with empty boundary.
# Deceptive uniform convergence question

1. Mar 31, 2013

### Zondrina

1. The problem statement, all variables and given/known data

http://gyazo.com/55eaace8994d246974ef750ebeb36069

2. Relevant equations

Theorem III : http://gyazo.com/af2dfeb33d3382430d39f275268c15b1

3. The attempt at a solution

At first this question had me jumping to a wrong conclusion. Upon closer inspection I see the sequence converges to 1 as n goes to infinity for |x|<1. The sequence converges to 0 as n goes to infinity for |x|≥1. Hence the sequence is not uniformly convergent over the whole real line. If we restrict the domain of x to (-1,1) or (-∞,-1] U [1,∞), then we can observe uniform convergence over each interval respectively. The question isn't too clear about what it's asking for, but that's my take.

2. Mar 31, 2013

### Dick

I think you are supposed to conclude that, given the theorem, if the $f_n$ are continuous and the limit function is not continuous, then the convergence can't be uniform. And I don't think they converge uniformly on (-1,1) either, or on any of the other intervals you are talking about. If you think they do, please let me know why.

3. Apr 1, 2013

### Zondrina

I see, I think I understand how the theorem and the question relate. I never did check the convergence on the intervals though, so I suppose I shouldn't have assumed.
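[Added note: the actual sequence is only in the linked image, so the following is the generic template behind the two points made in this thread, not the thread's exact problem.] Consider $$f_n(x)=x^n$$ on $$[0,1]$$. Each $$f_n$$ is continuous, while the pointwise limit is $$f(x)=0$$ for $$0\le x<1$$ and $$f(1)=1$$, which is discontinuous; by the theorem, the convergence cannot be uniform on $$[0,1]$$. Restricting to the open interval does not rescue uniformity either, since $$\sup_{x\in[0,1)}|x^n-f(x)|=\sup_{x\in[0,1)}x^n=1\not\to 0,$$ which is the same obstruction Dick raises for $$(-1,1)$$ above.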
# Detection of Diffuse Interstellar Bands in the z = 0.5 Damped Lyα System toward AO 0235+164*

Brian Andrew York, Sara L. Ellison, Brandon L. Lawton, Christopher W. Churchill, Theodore P. Snow, Rachel A. Johnson and Sean G. Ryan. The Astrophysical Journal, volume 647; published 18 May 2006.

We report the first detection of the 5705 and 5780 A diffuse interstellar bands (DIBs) in a moderate-redshift damped Lyα (DLA) system. We measure a rest-frame equivalent width of 63.2 ± 8.7 mA for the 5705 A feature and 216 ± 9 mA for the 5780 A feature in the zabs = 0.524 DLA toward AO 0235+164 and derive limits for the equivalent widths of the bands at 5797, 6284, and 6613 A. The equivalent width of the 5780 A band is lower than would be expected based on the Galactic correlation of DIB…
# Balloon Pay-off

The Balloon Payment equation calculates the balloon payment required at the end of the loan term to settle a loan.

# Description

This equation calculates the amount due for a mortgage with a balloon payment, based on an initial loan amount (P), for a fixed-rate interest (i) loan or mortgage, on an amortization schedule for a set number of years (n), requiring a balloon payment after a specified term of years (T). Balloon payments are required at the end of the contract time, after the borrower has paid regular periodic payments (e.g. monthly mortgage payments) for a period of time. Balloon notes limit the long-term interest-rate risk for the lender by limiting the duration of the fixed interest rate (i) to the balloon note period, even though the amortization period (n) may be much longer. The total debt cost of these loans is lower than that of a conventional fixed-rate mortgage. An advantage of these loans is that they often have a lower interest rate, but the final balloon payment is substantial, and for some borrowers this can be a disadvantage.

An amortization table shows, for each payment period of a loan, the payment amount that is applied to principal and the amount paid as interest. For a standard fixed-rate mortgage, the payment in the beginning is applied more towards the interest than to the principal. As the loan matures, the payment amount each month applied to principal increases, and the amount paid as interest decreases.

## INPUTS

• P - (Principal) original loan amount
• n - (Amortization period) the number of years to pay off the loan if there were no balloon payment
• i - (Annual rate) interest rate as a percentage; i.e., enter 4.6 for a 4.6% interest rate
• T - (Term) the year of the loan at the end of which the balloon payment is due

# Usage

In an example, the loan amount (P) is $50,000. The loan is computed for a term (n) of 30 years and thus has a lower payment amount, as the loan is amortized over the entire 30 years. The interest rate for this loan is set at a fixed 6%, and the balloon payment is defined to come due after 5 years (T = 5). Since at the beginning of an amortization schedule the payments pay much more interest than principal, a relatively small part of the principal is paid off by year 5. The balloon payment due at the end of year 5, which pays off the loan balance, is therefore $46,826.59.

# History

A balloon mortgage is a type of short-term mortgage that requires the borrower to make regular payments for a specific interval and then pay off the remaining balance. Balloon mortgages can take the form of interest-only loans or partially amortizing mortgages. This formula takes into account amortization.
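[For readers who want to reproduce the calculation, here is a minimal R sketch using the standard level-payment amortization formulas. This is an editorial addition; the page does not spell out the vCalc equation's exact compounding convention, and with monthly compounding this sketch yields a balloon of roughly $46,527 rather than the $46,826.59 quoted above, so treat it as illustrative only.]

# Balloon pay-off sketch: remaining balance after T years of monthly payments
# on a loan amortized over n years (standard formulas; convention is an assumption)
balloon_payoff <- function(P, i, n, T, m = 12) {
  r   <- i / 100 / m                         # periodic interest rate
  N   <- n * m                               # total scheduled payments
  k   <- T * m                               # payments made before the balloon
  pmt <- P * r / (1 - (1 + r)^(-N))          # level periodic payment
  P * (1 + r)^k - pmt * ((1 + r)^k - 1) / r  # remaining balance = balloon payment
}

balloon_payoff(P = 50000, i = 6, n = 30, T = 5)  # about 46527 under these assumptions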
# Statistics for rational points on curves of genus $g$ over $\mathbb{F}_q$, $g\gg q$

Consider the distribution of the number of $$\mathbb{F}_q$$ points as I range over smooth projective curves of genus $$g$$ (defined over $$\mathbb{F}_q$$). If $$q\gg g,$$ the Hasse-Weil bounds give me a lot of information. The other extreme case, $$g\gg q$$, came up in a conversation recently, and here, as far as I can tell, étale cohomology tells me nothing useful about the expected distribution of rational points. Instead, I'm tempted to conjecture that this distribution will be a Poisson distribution with mean $$q+1+\frac{1}{q}+\frac{1}{q^2}+\cdots.$$ Here is a heuristic. For a random plane curve of degree $$d$$, each point in $$\mathbb{P}^2_{\mathbb{F}_q}$$ has probability $$\frac{1}{q}$$ of lying on your curve. As $$d$$ goes to infinity, these events (for different points) become independent, and so your distribution will have a generating function of $$\left(1+\frac{x-1}{q}\right)^{q^2+q+1}$$ - which is a close-to-Poisson distribution of mean $$q+1+\frac{1}{q}.$$ Plane curves are quite special, so let's play the same game for an arbitrary curve. Take the canonical embedding into $$\mathbb{P}^{g-1}$$. If I assume again 1) that each point has a probability of $$\frac{1}{q^{g-2}}$$ of lying on the curve and 2) that for any fixed number $$k$$, $$k$$-tuples of these events become close to independent for large $$g$$, then the same heuristic gives my prediction above. Of course, this is a pretty fragile thought experiment, and I'm not sure that you should believe my guess. Has this question been considered anywhere in the literature, and if so, has anybody proposed (or even better, proved) an answer?

• This is quite related to another MathOverflow question from some years back: mathoverflow.net/questions/187116/… – Jason Starr Nov 8 '18 at 21:00
• See this paper, which confirms your intuition and more: arxiv.org/abs/1410.7373 – Vlad Matei Nov 8 '18 at 23:12
• @Vlad: An interesting paper, but somewhat puzzling. While their Conjecture 1, involving a Poisson distribution, is stated for fixed $q$ and $g\to\infty$, their purported evidence for it is based on proved results for $q>g^k, g\to\infty$. But it is well known that the asymptotic behavior of the number of points for genus large compared with $q$ (e.g., the Drinfeld-Vladut bound, exploiting a strong correlation between the Frobenius eigenvalues) drastically differs from the case of small genus (say, below $\sqrt{q}$, where such restrictions are not known to arise). – Victor Protsak Nov 9 '18 at 5:20
• (cont) I guess that the last section, comparing with the matrix models, concedes this point, but lacks lucidity as to whether, let alone why, these two regimes can exhibit similar statistical properties. – Victor Protsak Nov 9 '18 at 5:27
• @dhy: Where does the heuristic probability $\frac{1}{q^{g-2}}$ for a random point to lie on a canonical curve of genus $g$ over $\Bbb {F}_q$ come from? For a complete intersection of codimension $d$, we can pretend that the values of some $d$ defining polynomials are i.i.d. variables uniformly distributed over $\Bbb {F}_q$. But the general canonical curve is not a complete intersection except for a few small genus values. – Victor Protsak Nov 9 '18 at 6:00
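[Added note spelling out the arithmetic behind the heuristic; this is not part of the original thread.] If each of the $$N$$ points of the ambient projective space lies on the curve independently with probability $$p$$, the point count is binomial, and for small $$p$$ it is approximately Poisson with mean $$Np$$. For plane curves, $$N=q^2+q+1$$ and $$p=\frac{1}{q}$$, so $$Np=q+1+\frac{1}{q}$$. For the canonical embedding, $$N=\#\mathbb{P}^{g-1}(\mathbb{F}_q)=\frac{q^g-1}{q-1}, \qquad p=\frac{1}{q^{g-2}}, \qquad Np=\frac{q^g-1}{(q-1)\,q^{g-2}}\xrightarrow{\;g\to\infty\;}\frac{q^2}{q-1}=q+1+\frac{1}{q}+\frac{1}{q^2}+\cdots,$$ which recovers the conjectured mean.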
## Sunday, May 15, 2011

### Rejiggering an R Data Frame

This is a new experience for me. Mark Allen (author of the Open Source Research blog) tweeted a question about rearranging a data frame in R. The question being longer than 140 characters, he used Deck.ly to post an extended tweet. So I learned something new about the Twitterverse. I don't have a TweetDeck account, though, so my best option to post a response is to put it here.

Mark has a data frame with one row for each response to any of a set of questions, and three columns: respondent ID; question number; response. Here's a chunk of R code to create a small demo data frame along those lines:

# create some data
m <- matrix(c(1,1,11,2,3,23,1,2,12,2,1,21,2,2,22,1,3,13),
            nrow=6, ncol=3, byrow=TRUE)
d <- data.frame(m)
names(d) <- c("ID", "Question", "Answer")  # column names implied by the output below
print(d)

The output is:

  ID Question Answer
1  1        1     11
2  2        3     23
3  1        2     12
4  2        1     21
5  2        2     22
6  1        3     13

Here is code to rearrange it:

# sort the data by ID, then by Question
d <- d[do.call(order, d), ]
# extract a list of unique IDs and Question numbers
id <- unique(d[, "ID"])
q <- unique(d[, "Question"])
# rearrange the answers into the desired matrix layout
m <- matrix(d[, "Answer"], nrow=length(id), ncol=length(q), byrow=TRUE)
# add the ids and make a new data frame
m <- cbind(id, m)
dd <- data.frame(m)
names(dd) <- c("ID", paste("Q", q, sep=""))
print(dd)

The output of the last line (the rejiggered data frame) is:

  ID Q1 Q2 Q3
1  1 11 12 13
2  2 21 22 23

1. That's one way to do it. My recommendation instead would be to use the reshape or reshape2 packages. Here's the solution with reshape:

   > cast(d, ID ~ Question)
     ID  1  2  3
   1  1 11 12 13
   2  2 21 22 23

2. Thanks a lot for the solution; TweetDeck is pretty useful for tweeting. I think your solution might actually work better with large datasets. I am using a relatively large dataset and reshape bloats the memory to three times the size of my dataset. It freezes my computer. The only thing is that when there is no value for a cell we get NA with a regular for-loop approach or with reshape, but I guess here it will break.

3. @Siah: What breaks with missing values? I changed my little example so that one of the responses was NA and the code still worked. If either the respondent ID or the question number in the original data frame is missing, bad things will happen, but I think that's true regardless of the script (it means you have an answer but you're not sure from whom or to what question). For really big data sets, I might be tempted to stuff the data into SQLite or MySQL and then query out what I wanted.

4. @Harlan: Cool! I wasn't aware of the reshape package.
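A present-day footnote from me (not part of the original post or its comments): in reshape2, the successor to reshape mentioned in comment 1, the equivalent call is dcast with an explicit value column. A minimal sketch, assuming the demo data frame d built above:

library(reshape2)
# Wide layout: one row per ID, one column per Question
dd <- dcast(d, ID ~ Question, value.var = "Answer")
names(dd) <- c("ID", paste0("Q", sort(unique(d$Question))))
print(dd)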
A diverging lens of focal length

Question: A diverging lens of focal length $20 \mathrm{~cm}$ and a converging mirror of focal length $10 \mathrm{~cm}$ are placed coaxially at a separation of $5 \mathrm{~cm}$. Where should an object be placed so that a real image is formed at the object itself?

Solution: For the final image to coincide with the object, the light reflected from the mirror must retrace its path, so the rays must strike the converging (concave) mirror normally, i.e., along radii through its centre of curvature $C$, which lies a distance $R = 2f_m = 20 \mathrm{~cm}$ in front of the mirror. Since the mirror is $5 \mathrm{~cm}$ from the lens, $C$ lies $20 - 5 = 15 \mathrm{~cm}$ from the lens, on the object's side. The diverging lens must therefore form its (virtual) image at $C$: with $v = -15 \mathrm{~cm}$ and $f = -20 \mathrm{~cm}$, the lens equation $\frac{1}{v} - \frac{1}{u} = \frac{1}{f}$ gives $\frac{1}{u} = -\frac{1}{15} + \frac{1}{20} = -\frac{1}{60}$, so $u = -60 \mathrm{~cm}$. The object should be placed $60 \mathrm{~cm}$ from the lens, on the side away from the mirror.
Wire with current neutrally charged?

In this video How Special Relativity Makes Magnets Work Veritasium explains phenomenologically how the electric and magnetic forces are really just two sides of the same coin, connected via special relativity. Now I just have a quick question concerning his statement about the wire that is neutral while a current flows through it. Essentially a wire (not moving) with no current is neutral, since the negative and positive charges balance each other. Now let's say a current is switched on, leading to a flow of negative charges with some velocity $$v$$. This decreases the spacing between the electrons, therefore it should increase the negative charge density, while the positive charge density stays the same. Effectively the wire with some current should be negatively charged to the external observer, or what?

• Why do you think that the spacing between electrons decreases? For example, water flows in a pipe keeping the same density. Feb 9 '20 at 21:58
• Within special relativity? I imagine it as follows: Consider the electrons evenly spaced on the x-axis at the integers when there is no current. They are all attached to a long cord. Now the cord starts moving along the x-axis and pulls the electrons, i.e. we have a current. Since the cord is moving with respect to an external observer who notices the current, the distance between the electrons should decrease by Lorentz contraction. PS: In your water example, is it still the same density when it moves close to the speed of light? Feb 9 '20 at 22:30
• I mean let's consider a box of water which moves in the x-direction. Since the dimensions in the x-direction decrease by a factor of $1/\gamma$, the volume decreases by that amount for the same matter inside and therefore the density increases by a factor $\gamma$. However, additionally the mass of the matter increases by a factor of $\gamma$ due to its motion, so actually the mass density of the water should increase by $\gamma^2$, or? Feb 9 '20 at 22:46
• The length of the wire doesn't change (it is at rest). The number of electrons in the wire stays the same. The charge of the electron is invariant. So the density of negative charge doesn't change. Feb 9 '20 at 23:39

> Now let's say a current is switched on, leading to a flow of negative charges with some velocity v. This decreases the spacing between the electrons, therefore it should increase the negative charge density, while the positive charge density stays the same.

The analysis given by Veritasium originated from Purcell. My favorite presentation on the topic is here: http://physics.weber.edu/schroeder/mrr/MRRtalk.html

Unfortunately, your question is the most common confusion arising from Purcell's idea, and it is not addressed there. The confusion comes from the natural assumption that the electrons form a rigid body with a defined and fixed proper distance between electrons. This is not the case. The proper distance between electrons is highly variable and may be larger or smaller based on the EM fields and other charges in the environment.

In practice the spacing between electrons is controlled by the voltage on the wire and the self-capacitance of the wire. A strong positive voltage on the wire produces a low density of electrons in the lab frame, a neutral voltage produces a density of electrons equal to the proton density in the lab frame, and a strong negative voltage produces a high density of electrons in the lab frame. Once the density is determined in the lab frame, then the proper distance can be determined.
The normal rules of relativity apply, so the density in the electrons' frame will be lower than in the lab frame. So the lab-frame spacing undergoes length contraction, as normal, but from a proper distance that produces the observed charge density in the lab frame.

• Why is the density of electrons in the lab frame small when the voltage is positive and high when the voltage is negative? After all, positive and negative voltage just changes the direction in which the electrons move, and the situation should be symmetric!? Feb 14 '20 at 16:00
• @Diger When the voltage is positive then the wire has a positive net charge. Since the distance between the positive charges is fixed, a net positive charge means there is a low density of negative charges. When the voltage is negative then the wire has a negative net charge and hence a high density of negative charges. I am not sure why you think the two situations should be symmetric. Can you explain? – Dale Feb 14 '20 at 16:50
• Maybe I misunderstand what you actually mean by "voltage". I was thinking about a wire where you apply a voltage at both ends, but from your explanation it sounds more like you are talking about the voltage of the electric field that is generated by the wire due to positive/negative charges (a charge imbalance experienced by the observer in the lab frame) inside the wire. Feb 14 '20 at 16:54
• @Diger I am talking about applying a voltage to both ends also. I am not sure where the confusion is. If we have a wire where the left end is at +1000 V and the right end is also at +1000 V (wrt ground) then there will be no current through the wire but a net positive charge on the wire (low density of electrons). – Dale Feb 14 '20 at 17:00
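A supplementary note (my addition, not part of the original answer): the bookkeeping above can be summarized in one relation. If the electrons have proper linear charge density $$\lambda_0$$ (in their own rest frame) and drift at speed $$v$$ in the lab, length contraction of their spacing gives

$$\lambda_{\text{lab}} = \gamma \lambda_0, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

so a wire that is neutral in the lab ($$\lambda_{\text{lab}} = -\lambda_+$$) must have $$\lambda_0 = -\lambda_+/\gamma$$: the proper density adjusts to whatever the lab-frame boundary conditions demand, exactly as the answer argues.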
$\ce{R_2C=CR_2 + X_2 \rightarrow R_2CX-CR_2X} \tag{8.2.1}$

Mechanism and stereochemistry of halogenation

After completing this section, you should be able to: write the equation for the reaction of chlorine or bromine with a given alkene; identify the conditions under which an addition reaction occurs between an alkene and chlorine or bromine; and write the mechanism for the addition reaction that occurs between an alkene and chlorine or bromine, accounting for the stereochemistry of the product. Make certain that you can define, and use in context, the key terms below.

Halogenation is the addition of halogen atoms to a π-bond system: a dihalide such as Cl$_2$ or Br$_2$ is added across the carbon-carbon double bond, with the halogen acting as an electrophile that attacks the double bond. The halogens commonly used in this type of reaction are Br$_2$ and Cl$_2$; in thermodynamic terms, I$_2$ is too slow for this reaction because of the size of its atom, and F$_2$ is too vigorous and explosive. The halogenation of alkenes is carried out in a neutral organic solvent such as carbon tetrachloride (CCl$_4$) or dichloromethane (DCM, CH$_2$Cl$_2$) that cannot act as a nucleophile when the halonium ion is formed. A common test for unsaturation is the decolourization of a reddish-brown bromine solution by an alkene. Possibly the most interesting feature of this reaction is that the products follow a very predictable stereochemical pattern: chlorine and bromine add rapidly to a wide variety of alkenes without inducing the kinds of structural rearrangements (carbocation shifts) noted for strong acids, because a discrete carbocation intermediate does not form in these reactions.

The electrophilic addition mechanism consists of two steps. As a halogen molecule, for example Br$_2$, approaches the double bond of the alkene, electrons in the double bond repel electrons in the bromine molecule, polarizing the halogen-halogen bond and creating a dipole moment in it. Heterolytic bond cleavage then occurs, and the halogen atom bearing the positive charge reacts as an electrophile.

Step 1: The Br-Br bond polarizes, heterolytic cleavage occurs, and the Br with the positive charge forms a cyclic intermediate with the double bond: two p orbitals of the halogen bond with the two carbon atoms, creating a cyclic bromonium ion. This intermediate is more stable than the corresponding linear carbocation because all the atoms have a complete octet of electrons. The stabilizing interaction between the developing carbocation center and the electron-rich halogen atom on the adjacent carbon delocalizes the positive charge, makes rearrangement unlikely, and blocks halide ion attack from the syn location; in a few cases, three-membered cyclic halonium cations have even been isolated and identified as true intermediates. The positive charge is delocalized over all the atoms of the ring but should be concentrated at the more substituted carbon (where positive charge is more stable), and this is the site to which the nucleophile will bond.

Step 2: The bromide anion attacks either carbon of the bridged bromonium ion from the back side of the ring. The ring opens up, and the two halogens end up with anti stereochemistry. The reason for this is that the bromonium ion blocks access to the carbon atoms along an entire side, due to bond formation with the two carbon atoms; such blocking is referred to as steric hindrance. The addition is therefore not regioselective but stereoselective. Because the negatively charged halide can attack either carbon from the side opposite the ring, a mixture of stereoisomeric products results: optically inactive starting material produces optically inactive products, either achiral (meso) or a racemic mixture. If, however, the original alkene structure possesses restricted rotation due to a factor other than a double bond, a trans-addition product can be isolated.

Additional evidence in support of the bromonium ion mechanism comes from studies of the reaction stereochemistry, in particular the results obtained when an alkene (such as cyclopentene) reacts with bromine in the presence of sodium chloride (see Figure 8.2: Reaction of an alkene with bromine in the presence of sodium chloride). Once formed, the bromonium ion is susceptible to attack by two nucleophiles, chloride ion and bromide ion, and in fact a mixture of two products (both produced by anti attack) is formed.

We will use Br$_2$ in our example for the halogenation of ethylene. Exercises: 1. What is the mechanism of adding Cl$_2$ to cyclohexene? 2. Predict the products for 1,2-dimethylcyclopentene reacting with HCl, giving the proper stereochemistry.

Contributors: Dr. Dietmar Kennepohl FCIC (Professor of Chemistry, Athabasca University), Prof. Steven Farmer (Sonoma State University). Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. For more information contact us at [email protected] or check out our status page at https://status.libretexts.org.
## Declarative (Auckland) GUI Layout From: "andrew cooke" <andrew@...> Date: Thu, 19 Feb 2009 07:54:57 -0300 (CLST) "The Auckland Layout Model (ALM) is a novel technique for specifying 2D layout as it is used for arranging the controls in a GUI. The model allows the specification of constraints based on linear algebra, and an optimal layout is calculated using linear programming. Linear equalities and inequalities can be specified on horizontal and vertical tabstops, which are virtual lines that form a grid to which all the elements of the GUI are aligned." http://aucklandlayout.sourceforge.net/ (Ugly) examples http://aucklandlayout.sourceforge.net/examples/index.html .NET, Java and Haiku support (what is Haiku?) Via http://lambda-the-ultimate.org/node/3070 which contains more ideas. Andrew PS LEPL update here - http://groups.google.com/group/lepl/browse_thread/thread/fa687eb737cbbde6#
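Not from the ALM docs - just a toy illustration of the idea: "layout as linear programming" can be sketched in a few lines with scipy's linprog, treating tabstop positions as variables and minimum sizes/gaps as linear inequalities. All names here are mine, not ALM's API.

```python
# Toy sketch of constraint-based layout: place two 100px-wide buttons on a
# row, 5px apart, with 10px margins, minimizing the total window width.
from scipy.optimize import linprog

# Variables: x1, x2 (left tabstops of the two buttons), w (window width)
c = [0, 0, 1]                      # objective: minimize window width w
A_ub = [
    [-1,  0,  0],                  # -x1       <= -10   (left margin)
    [ 1, -1,  0],                  # x1 - x2   <= -105  (width 100 + gap 5)
    [ 0,  1, -1],                  # x2 - w    <= -110  (width 100 + margin 10)
]
b_ub = [-10, -105, -110]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)                       # -> [10., 115., 225.]
```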
# Why is $\lim_{x \to \infty} (\sqrt{9x^2+x}-3x)=\frac{1}{6}$?

My exercise book and Wolfram Alpha give: $$\lim\limits_{x\to\infty}(\sqrt{9x^2+x}-3x)=\frac{1}{6}$$ When I work it out I get 0: $$\lim\limits_{x\to\infty}x\sqrt{9\frac{x^2}{x^2}+\frac{x}{x^2}}-\lim\limits_{x\to\infty}3x$$ $$\lim\limits_{x\to\infty}x\cdot\sqrt{\lim\limits_{x\to\infty}9+\lim\limits_{x\to\infty}\frac{1}{x}}-\lim\limits_{x\to\infty}3x$$ $$\lim\limits_{x\to\infty}x\cdot\sqrt{9+0}-\lim\limits_{x\to\infty}3x$$ $$3\lim\limits_{x\to\infty}x-3\lim\limits_{x\to\infty}x$$ $$=0$$ Where am I going wrong?

• The limit as $x\to\infty$ of $x$ does not exist. To start again, multiply top and bottom by $\sqrt{9x^2+x}+3x$. – André Nicolas Oct 28 '15 at 23:21
• You can write $\lim f(x)-g(x)=\lim f(x)-\lim g(x)$ only if these limits are finite. This is not the case here. – Bernard Oct 28 '15 at 23:24
• Thanks André & Bernard, I understand where I went wrong now. – Brendan Hill Oct 28 '15 at 23:39
• – Martin Sleziak Aug 16 '17 at 15:45
• Next time, use conjugates. They are a great way of solving problems. For example, simplify $$\frac{\sqrt{7} + \sqrt{6}}{\sqrt{7}}.$$ Most people would write $$1+\sqrt{\frac 67}$$ but since for all $n$ one has $n = n\times 1$, we can find the conjugate of $\sqrt{7} - \sqrt{6}$, namely $\sqrt{7} + \sqrt{6}$, and now we carry out as follows: $$\frac{\sqrt{7} - \sqrt{6}}{\sqrt{7}} = \frac{\sqrt{7} - \sqrt{6}}{\sqrt{7}}\times 1 = \frac{\sqrt{7} - \sqrt{6}}{\sqrt{7}}\times \frac{\sqrt{7} + \sqrt{6}}{\sqrt{7} + \sqrt{6}} =\cdots$$ and you know where it goes from there. (This was an e.g.) – Mr Pie Feb 14 '18 at 12:25

$$\lim\limits_{x\to\infty}(\sqrt{9x^2+x}-3x)\frac{\sqrt{9x^2+x}+3x}{\sqrt{9x^2+x}+3x} = \lim\limits_{x\to\infty} \frac{(9x^2+x)-9x^2}{\sqrt{9x^2+x}+3x} = \lim\limits_{x\to\infty} \frac{x}{\sqrt{9x^2+x}+3x}= \lim\limits_{x\to\infty}\frac{1}{\sqrt{9+1/x}+3} = \frac{1}{6}$$

Although this doesn't address the specific question of why the procedure in the OP is flawed, I thought it might be instructive to present an approach using a powerful general method. To that end, we proceed. One approach is to use the Generalized Binomial Theorem and expand the square root as \begin{align} \sqrt{9x^2+x}&=3x\left(1+\frac{1}{9x}\right)^{1/2}\\\\ &=3x\left(1+\frac{1}{18x}+O(x^{-2})\right) \end{align} Then, we have $$\sqrt{9x^2+x}-3x=\frac{1}{6}+O(x^{-1})$$ Taking the limit, we obtain the expected result!

• This is the method I would have used, even though the other is more efficient. – Lubin Oct 28 '15 at 23:49
• @Lubin Efficiency is in the eyes of the beholder. The approach herein looks fairly efficient to me. But thank you for the comment! Very much appreciated. – Mark Viola Oct 28 '15 at 23:51
• The series approach is, after a while, the "natural" way. The main advantage of "rationalizing the numerator" is that it uses ideas that come earlier in standard curricula. – André Nicolas Oct 29 '15 at 0:20
• @AndréNicolas Yes; I agree. Please read the preamble to the development: "I thought it might be instructive to present ..." – Mark Viola Oct 29 '15 at 0:27

This is one of the most common mistakes made by beginners in calculus, and I have talked about it often on my blog as well as here. The rules of the "algebra of limits" allow us to evaluate the limit of a complicated expression in terms of the limits of its sub-expressions (or parts), but there are certain restrictions.
One can replace a sub-expression with its limit only in the following two cases:

• The sub-expression has a finite limit and it is related to the rest of the expression in an additive manner (or we say that the sub-expression is a term in the whole expression).
• The sub-expression has a finite non-zero limit and it is related to the rest of the expression in a multiplicative manner (or we say that the sub-expression or its reciprocal is a factor of the whole expression).

Most people believe that the existence of limits of all sub-expressions is a must, but that is not the case. We need only the existence of the limit of the part which is going to be replaced, and need not worry about the limit of the rest of the expression. In the current scenario we have multiple mistakes. The first one is at the first step. The expression $(\sqrt{9x^{2} + x} - 3x)$ has a sub-expression $3x$ which acts as a term. Unfortunately its limit does not exist, and hence it can't be replaced by $\lim_{x \to \infty}3x$. However, we can continue to keep this term intact and write $$\lim_{x \to \infty}\left(x\sqrt{9 + \frac{1}{x}} - 3x\right)$$ The next fundamental mistake is trying to replace the expression $1/x$ with its limit $0$. This is because the expression $(1/x)$ is neither a term nor a factor of the whole expression. A student may think of another approach: write the expression as $$\lim_{x \to \infty}x\left(\sqrt{9 + \frac{1}{x}} - 3\right)$$ and then note that the sub-expression $\left(\sqrt{9 + \dfrac{1}{x}} - 3\right)$ is a factor of the whole expression. Unfortunately the limit of this part is $0$, and in the case of factors we do need the limit to be non-zero. Hence this approach also fails. If we start with a different question like $$\lim_{x \to \infty}(\sqrt{9x^{2} + x} - 4x) = \lim_{x \to \infty}x\left(\sqrt{9 + \dfrac{1}{x}} - 4\right)$$ then we can replace $\left(\sqrt{9 + \dfrac{1}{x}} - 4\right)$ with its limit $-1$ (non-zero), get $\lim_{x \to \infty}-x$, and correctly conclude that $(\sqrt{9x^{2} + x} - 4x) \to -\infty$ as $x \to \infty$. The above example illustrates that it is safe to replace a sub-expression with its limit under the two circumstances (a term or a factor) mentioned earlier, without worrying about the limit of the remaining part of the expression.

Your expression is the same as \begin{align} \lim_{x\to\infty}\left[\sqrt{\left(3x+\frac16\right)^2-\frac{1}{36}}-\left(3x+\frac16\right)+\frac{1}{6}\right] &=\lim_{z\to\infty}\left[\sqrt{z^2-\frac{1}{36}}-z+\frac{1}{6}\right]\\ &=\lim_{z\to\infty}\left[\sqrt{z^2-\frac{1}{36}}-z\right]+\frac{1}{6} \end{align} Now you can either formally rationalize the numerator as in the accepted answer, or informally see that the last limit is \begin{align} \lim_{z\to\infty}\left[\sqrt{z^2-\frac{1}{36}}-z\right]+\frac{1}{6} &\sim\lim_{z\to\infty}\left[\sqrt{z^2}-z\right]+\frac{1}{6}=\frac16 \end{align}
Document 336

## Adding a LaTeX package to SWP/SW: a closer look at some details

Version: 3.0, 3.5, 3.51, & 4.x - Scientific WorkPlace & Scientific Word

Following is another example of adding a LaTeX package to SWP/SW. This example uses the extsizes package from CTAN (the Comprehensive TeX Archive Network) and follows the steps in the article Using shells and typesetting specifications from outside sources.

The standard LaTeX classes (article, report, etc.) support 10-, 11-, and 12-point text. These are the most commonly used sizes in publishing. However, for certain applications there may be a need for other sizes. The extsizes classes provide support for 8-, 9-, 10-, 11-, 12-, 14-, 17- and 20-point type.

These instructions assume you have installed SW in the directory c:\swp35. Adjust this path as necessary for your installation.

#### Step 1. Locate the LaTeX package files.

In this case, the files are on CTAN in the directory macros/latex/contrib/extsizes/.

#### Step 2. Move the LaTeX package files so they will be available to TrueTeX.

The new directory c:\swp35\TCITeX\TeX\LaTeX\contrib\other\extsizes was created and all of the files from CTAN were placed in this directory.

#### Step 3. Complete any further installation steps required for the LaTeX package.

No other steps are needed in this case.

#### Step 4. Test the sample documents included with the package distribution.

A sample file is not included with this package distribution. This package implements extra base point size options for the LaTeX document classes article, report, book, letter, and proc. SWP and SW do not directly support the letter documentclass, so it is ignored. The following four sample documents were created to test the installation of the package:

• Sample document 1

%testextarticle.tex
\documentclass[14pt]{extarticle}
\begin{document}
This is just some harmless text followed by some math:
$$\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$$
\end{document}

• Sample document 2

%testextreport.tex
\documentclass[17pt]{extreport}
\begin{document}
This is just some harmless text followed by some math:
$$\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$$
\end{document}

• Sample document 3

%testextbook.tex
\documentclass[20pt]{extbook}
\begin{document}
This is just some harmless text followed by some math:
$$\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$$
\end{document}

• Sample document 4

%testextproc.tex
\documentclass[9pt]{extproc}
\begin{document}
This is just some harmless text followed by some math:
$$\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$$
\end{document}

Each sample document was placed in the directory c:\swp35\docs and given the name indicated at the beginning of each sample. A different base point size was selected for each sample document. Each sample document was compiled using the TrueTeX Formatter started from the Windows Start Menu.

When using LaTeX to compile the sample documents, a LaTeX error will be generated if the latex_ml format file is used. This format file uses the dc font family, but the package apparently implements the ec font family. If the latex format is selected, the sample documents should compile without errors. This change can be made by selecting Typeset/Expert Settings/Format Settings and choosing TrueTeX from the drop-down list box.

#### Step 5.
Try opening the sample documents in SWP/SW.

In SWP or SW, select File/Open, highlight the four sample documents, and select Open. A message appears as each document is opened, warning that the document may not open correctly. This is caused by the absence of a .cst file for the new document classes used by the sample documents. At this point, select Yes in answer to the question "Do you wish to continue?" and each sample document is successfully opened. Instructions for creating the .cst files are in the next step.

Save each sample document with a new name. Preview the document from inside SWP or SW and compare with the previous results. Remember, the latex_ml format file cannot be used with these document classes. In SWP or SW, select Typeset/Expert Settings and choose the Format Settings tab. Change TrueTeX MultiLingual to TrueTeX to change from using the latex_ml to the latex format file. Compare the preview of the sample documents saved using SWP or SW with the preview of the original sample documents. The previews should look the same.

#### Step 6. If needed, create a screen style (.cst) file.

When SWP or SW opens a document, it looks in several places for a .cst file used when formatting the document for the screen. If it fails to find an appropriate .cst file, the warning message noted in the previous step is displayed. Since each documentclass implemented by this package is based on a similar standard LaTeX documentclass, an existing .cst file can be used, but it must be copied to a location needed by SWP or SW. Create the following directories, then copy and rename:

1. extarticle: Create the directory c:\swp35\styles\extarticle. Copy the file c:\swp35\styles\article\article.cst to this new directory and rename the file so it becomes c:\swp35\styles\extarticle\extarticle.cst.
2. extreport: Create the directory c:\swp35\styles\extreport. Copy the file c:\swp35\styles\report\report.cst to this new directory and rename the file so it becomes c:\swp35\styles\extreport\extreport.cst.
3. extbook: Create the directory c:\swp35\styles\extbook. Copy the file c:\swp35\styles\book\book.cst to this new directory and rename the file so it becomes c:\swp35\styles\extbook\extbook.cst.
4. extproc: Create the directory c:\swp35\styles\extproc. Copy the file c:\swp35\styles\article\article.cst to this new directory and rename the file so it becomes c:\swp35\styles\extproc\extproc.cst.

Notice that the article.cst file is used for extproc, since the proc documentclass is not included with SW.

#### Step 7. As necessary, make modifications to the .cst file.

No modifications are needed to any of the .cst files used in the previous step. At this point, the sample documents can now be opened without a warning message.

#### Step 8. Create a new shell document.

The shell documents provided with SWP and SW typically contain many of the various document elements, but this is not strictly necessary. You can use the sample documents from Step 5 above and save each sample document using the Shell (*.shl) file type in an appropriate directory, perhaps c:\swp35\Shells\Articles. SWP and SW use the file c:\swp35\Typeset\classes.opt to show the available settings for a document class.
Use an ASCII editor to open c:\swp35\Typeset\classes.opt and add the following:

[extarticle]
1=Body text point size
1.1=8pt,8pt
1.2=9pt,9pt
1.3=10pt - default,
1.4=11pt,11pt
1.5=12pt,12pt
1.6=14pt,14pt
1.7=17pt,17pt
1.8=20pt,20pt
2=Paper size
2.1=8.5x11 - default,letterpaper
2.2=a4,a4paper
2.3=a5,a5paper
2.4=b5,b5paper
2.5=Legal size,legalpaper
2.6=Executive size,executivepaper
3=Orientation
3.1=Portrait - default,
3.2=Landscape,landscape
4=Print side
4.1=Print one side - default,oneside
4.2=Print both sides,twoside
5=Quality
5.1=Final - default,final
5.2=Draft,draft
6=Title page
6.1=Title page,titlepage
6.2=No title page,notitlepage
7=Columns
7.1=One column - default,onecolumn
7.2=Two columns,twocolumn
8=Equation numbering
8.1=Numbers on left,leqno
8.2=Numbers on right - default,
9=Displayed equations
9.1=Centered - default,
9.2=Flush left,fleqn
10=Bibliography style
10.1=Compressed - default,
10.2=Open,openbib

[extreport]
1=Body text point size
1.1=8pt,8pt
1.2=9pt,9pt
1.3=10pt - default,
1.4=11pt,11pt
1.5=12pt,12pt
1.6=14pt,14pt
1.7=17pt,17pt
1.8=20pt,20pt
2=Paper size
2.1=8.5x11 - default,letterpaper
2.2=a4,a4paper
2.3=a5,a5paper
2.4=b5,b5paper
2.5=Legal size,legalpaper
2.6=Executive size,executivepaper
3=Orientation
3.1=Portrait - default,
3.2=Landscape,landscape
4=Print side
4.1=Print one side - default,oneside
4.2=Print both sides,twoside
5=Quality
5.1=Final - default,final
5.2=Draft,draft
6=Title page
6.1=Title page,titlepage
6.2=No title page,notitlepage
7=Columns
7.1=One column - default,onecolumn
7.2=Two columns,twocolumn
8=Start chapter on left
8.1=No,openright
8.2=Yes - default,openany
9=Equation numbering
9.1=Numbers on left,leqno
9.2=Numbers on right - default,
10=Displayed equations
10.1=Centered - default,
10.2=Flush left,fleqn
11=Bibliography style
11.1=Compressed - default,
11.2=Open,openbib

[extbook]
1=Body text point size
1.1=8pt,8pt
1.2=9pt,9pt
1.3=10pt - default,
1.4=11pt,11pt
1.5=12pt,12pt
1.6=14pt,14pt
1.7=17pt,17pt
1.8=20pt,20pt
2=Paper size
2.1=8.5x11 - default,letterpaper
2.2=a4,a4paper
2.3=a5,a5paper
2.4=b5,b5paper
2.5=Legal size,legalpaper
2.6=Executive size,executivepaper
3=Orientation
3.1=Portrait - default,
3.2=Landscape,landscape
4=Print side
4.1=Print one side,oneside
4.2=Print both sides - default,twoside
5=Quality
5.1=Final - default,final
5.2=Draft,draft
6=Title page
6.1=Title page,titlepage
6.2=No title page,notitlepage
7=Columns
7.1=One column - default,onecolumn
7.2=Two columns,twocolumn
8=Start chapter on left
8.1=No - default,openright
8.2=Yes,openany
9=Equation numbering
9.1=Numbers on left,leqno
9.2=Numbers on right - default,
10=Displayed equations
10.1=Centered - default,
10.2=Flush left,fleqn
11=Open bibliography style
11.1=Open bibliography,openbib
11.2=Closed bibliography - default,

[extproc]
1=Body text point size
1.1=8pt,8pt
1.2=9pt,9pt
1.3=10pt - default,
1.4=11pt,11pt
1.5=12pt,12pt
1.6=14pt,14pt
1.7=17pt,17pt
1.8=20pt,20pt
2=Paper size
2.1=8.5x11 - default,letterpaper
2.2=a4,a4paper
2.3=a5,a5paper
2.4=b5,b5paper
2.5=Legal size,legalpaper
2.6=Executive size,executivepaper
3=Orientation
3.1=Portrait - default,
3.2=Landscape,landscape
4=Print side
4.1=Print one side - default,oneside
4.2=Print both sides,twoside
5=Quality
5.1=Final - default,final
5.2=Draft,draft
6=Title page
6.1=Title page,titlepage
6.2=No title page,notitlepage
7=Columns
7.1=One column - default,onecolumn
7.2=Two columns,twocolumn
8=Equation numbering
8.1=Numbers on left,leqno
8.2=Numbers on right - default,
9=Displayed equations
9.1=Centered - default,
9.2=Flush left,fleqn
10=Bibliography style
10.1=Compressed - default,
10.2=Open,openbib

These settings offer the same options as the standard documentclass options, except that the extra size options are added. With the above addition to classes.opt, use the menu Typeset/Options and Packages, choose the Class Options tab, and use the Modify button to select the available class options.
Problem 8-106.

Given $\vec { \text{m} } = \langle - 7 , - 1 \rangle$ and $\vec { \text{n} } = \langle 5,2 \rangle$, calculate $\arg\left(\vec{\text{p}}\right)$ and $||\vec{\text{p}}||$ where $\vec { \text{p} } = 6 \vec { \text{m} } - 8 \vec { \text{n} }$.

Use the Pythagorean Theorem to determine the length of vector $\vec{\text{p}}$. $\arg\left(\vec{\text{p}}\right)$ is the direction. Be sure to consider which quadrant the vector is in when giving your final answer.
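As a check on the hints above (my sketch, not part of the CPM materials), the computation can be carried out directly:

```python
import math

# p = 6m - 8n with m = (-7, -1), n = (5, 2)
px = 6 * (-7) - 8 * 5      # -82
py = 6 * (-1) - 8 * 2      # -22
magnitude = math.hypot(px, py)                   # ||p|| = sqrt(82^2 + 22^2)
angle = math.degrees(math.atan2(py, px)) % 360   # direction, quadrant-aware
print(round(magnitude, 2), round(angle, 2))      # ~84.9 and ~195.0 degrees
```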
## Essential University Physics: Volume 1 (3rd Edition)

a) Since pressure and density are directly proportional, we use the result from the previous problem to find: $\rho = \rho_0e^{\frac{-h}{h_0}}$

b) We know that the mass of atmosphere below a certain height h is given by: $M = \int_0^h dm$ We need this integral to be in terms of h so that we can find the desired height, so we obtain: $dm = \rho \, dV = 4\pi(R_e+h)^2 \rho_0 e^{\frac{-h}{h_0}} \, dh$ (Note: above we substitute the equation from part a for the value of $\rho$.) Because $h_0 \ll R_e$, the factor $(R_e+h)^2$ is essentially constant over the integration range, so the integral is proportional to $1 - e^{-h/h_0}$. Setting the mass below height h equal to $\frac{1}{2}M$, we obtain: $\frac{1}{2}=1-e^{\frac{-h}{h_0}}$ $h=h_0\ln 2$ $(8{,}200)\ln 2 \approx \fbox{5,683 meters}$
# How to solve problems related to two bodies connected by a spring using conservation of linear momentum and energy? How to use the reduced mass concept?

Grade: 12

## 2 Answers

SAGAR SINGH - IIT DELHI (879 Points, 10 years ago): Dear student, in such questions apply two concepts: conservation of energy and conservation of momentum.

Pramod J, AskiitiansExpert-IIT-B (36 Points, 10 years ago): Dear student, given two bodies, one with mass $m_{1}$ and the other with mass $m_{2}$, they will move about the barycenter of the two bodies. The equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass $m_\text{red} = \mu = \cfrac{1}{\cfrac{1}{m_1}+\cfrac{1}{m_2}} = \cfrac{m_1 m_2}{m_1 + m_2}.$ The relative acceleration between the two bodies is given by $a= a_1-a_2 = \left({1+{m_1 \over m_2}}\right) a_1 = {{m_2+m_1}\over{m_1 m_2}} m_1 a_1 = {F_{12} \over m_\text{red}}.$
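A worked supplement (my addition, not part of either answer): for two masses joined by a spring of stiffness $k$ and natural length $\ell_0$, the relative coordinate $x = x_1 - x_2$ reduces the problem to a single oscillator of mass $\mu$:

$\mu \ddot{x} = -k\,(x-\ell_0), \qquad \omega = \sqrt{\frac{k}{\mu}}.$

Conservation of momentum, $m_1 v_1 + m_2 v_2 = \text{const}$, fixes the center-of-mass motion, while conservation of energy in the center-of-mass frame reads $\frac{1}{2}\mu \dot{x}^2 + \frac{1}{2}k(x-\ell_0)^2 = \text{const}.$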
# log-log Plot of Covid-19 Using Plotly

From time to time I check on the Covid-19 trends using the log-log diagram. This plot is characterized by the total number of cases shown on the X axis and the number of new confirmed cases in the past week shown on the Y axis. In order to reproduce this plot I will be using plotly.

### Data and Preparation

The data is sourced from:

I am especially interested in plotting data for the city I live in, in order to get a clearer picture. The raw data is a list of numbers, representing new daily cases on the given date:

10.03.2020. 2
11.03.2020. 3

However, the data obtained is missing values from 30.3.2020. to 15.4.2020. Since this is not a scientific paper, I took the liberty of filling in the missing data using the growth factor. The growth factor represents the rate at which new cases progress. It is calculated by dividing the number of new cases on the current day by the number of new cases the day before. However, the growth factor data is available only for the whole country, so the interpolated data will not be precise.

### The Fun Part

My data is stored in CSV, and the easiest way to manipulate a CSV file is with pandas. It is recommended to install pandas in your virtual environment using pip:

pip install pandas

From here it is easy to obtain daily new cases from the CSV:

```python
import pandas as pd

if __name__ == '__main__':
    # File and column names here are placeholders; adjust to your CSV layout
    daily_new = pd.read_csv('covid.csv')['new_cases'].tolist()
```

Now, we are two steps removed from the plot. For the X axis, we need to calculate the total number of cases for each day since the virus was first detected in the city. This is the cumulative sum. That can be calculated easily using a list comprehension.

```python
def get_daily_total(data):
    # Each entry is today's new cases plus all previous days
    return [t + sum(data[:i]) for i, t in enumerate(data)]
```

In other words, each day is the sum of the number of new cases that day and all of the previous days. The Y axis is represented by a similar cumulative sum, but with a seven-day window.

```python
def get_seven_past_days_total(data):
    # This could also be a list comprehension
    # (the ugly one though). But this is left as
    # an exercise to the reader ;)
    seven_days_running_total = []
    for i, today in enumerate(data):
        # Today plus the six previous days: a seven-day window
        window = data[max(0, i - 6):i + 1]
        seven_days_running_total.append(sum(window))
    return seven_days_running_total
```

Each number in the resulting list is the sum of new cases in the last seven days. Now we have all the numbers we need to make the plot. But before that, you will need to install plotly:

pip install plotly

Now we can plot the log-log diagram:

```python
import plotly.graph_objects as go

def plot(x, y, title):
    fig = go.Figure()
    # We will use a Scatter plot with log-log axes
    fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers'))
    fig.update_layout(
        xaxis_type='log',
        yaxis_type='log',
        xaxis_title='Total Confirmed Cases',
        yaxis_title='New Confirmed Cases in the last 7 days',
        title_text=title,
        font=dict(
            family='Courier New, monospace',
            size=18,
            color='#7f7f7f'
        )
    )
    fig.show()
```

Finally, we have:

```python
if __name__ == '__main__':
    # Same placeholder CSV as above
    daily_new = pd.read_csv('covid.csv')['new_cases'].tolist()
    plot(get_daily_total(daily_new),
         get_seven_past_days_total(daily_new),
         'Covid-19 log-log plot')
```
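For completeness (my sketch, not from the original post): the growth-factor interpolation described in the data-preparation section could look something like this, assuming the countrywide daily counts are available:

```python
def growth_factors(data):
    # Ratio of each day's new cases to the previous day's (skipping zero days)
    return [b / a for a, b in zip(data, data[1:]) if a != 0]

def fill_missing(last_known, factors):
    # Extrapolate city counts through the gap using country growth factors
    filled, value = [], last_known
    for f in factors:
        value *= f
        filled.append(round(value))
    return filled
```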
# Find a Residential property for sale in Nigel

Finding a Residential property using the map is faster and recommended! To search for properties using the map click here

## View all property for sale in Nigel

View Nigel suburbs starting with:

#### alra park

alra park has approximately 1 property for sale. The suburb has a total area of approximately 0.956505 km2.

#### dunnottar

dunnottar has approximately 11 properties for sale. The suburb has a total area of approximately 3.188587 km2. Estimated average price for listed properties in this area: R 991 181.

#### ferryvale

ferryvale has approximately 23 properties for sale. The suburb has a total area of approximately 2.227734 km2. Estimated average price for listed properties in this area: R 2 396 347.

#### glenverloch

glenverloch has approximately 1 property for sale. The suburb has a total area of approximately 0.3031656 km2.

#### greater nigel

greater nigel has approximately 4 properties for sale. The suburb has a total area of approximately 89.03364 km2.

#### laversburg

laversburg has approximately 2 properties for sale. The suburb has a total area of approximately 0.3383951 km2.

#### nigel

nigel has approximately 9 properties for sale. The suburb has a total area of approximately 2.630572 km2.

#### noycedale

noycedale has approximately 1 property for sale. The suburb has a total area of approximately 1.791175 km2.

#### pretoriusstad

pretoriusstad has approximately 9 properties for sale. The suburb has a total area of approximately 1.567774 km2.

#### sharon park

sharon park has approximately 14 properties for sale. The suburb has a total area of approximately 5.236104 km2. Estimated average price for listed properties in this area: R 1 336 785.

#### visagie park

visagie park has approximately 8 properties for sale. The suburb has a total area of approximately 2.50151 km2.
According to an article published Jan. 9 in the journal Nature Communications, researchers from the University of California, Los Angeles (UCLA) employed nanotechnology to achieve a crack-free arc weld of 7075 aluminum alloy. The research team fabricated an AA7075 filler rod with 1.7 vol-% titanium carbide (TiC) nanoparticles measuring 40 to 60 nm. They tested the new filler rod against conventional AA7075 filler and ER5356 filler during gas tungsten arc welding (GTAW) of two 3.175-mm-thick AA7075 sheets under identical parameters. The nanotreated filler yielded an even weld bead with no signs of cracking, while the conventional fillers exhibited cracks in the bead's melting zones.

Lightweight yet strong alloys are especially critical to transportation applications. Reducing a vehicle's weight can drastically reduce its fuel consumption as well as emissions. For this reason, automotive manufacturers have experimented with aluminum chassis components for decades. Ford's thirteenth-generation F-series trucks, produced since 2015, replaced many steel body panels with aluminum variants and shaved an average of 600 lb off the vehicles.

Widespread adoption of strong, lightweight alloys in the automotive industry is largely driven by the alloy's weldability. AA7075 has an excellent strength-to-weight ratio, but the alloy has long been considered unweldable due to its susceptibility to hot tearing. When used in the aerospace industry, AA7075 is typically joined using rivets or bolts, and more recently friction stir welding (FSW) has successfully welded the alloy. However, due to FSW's difficulty with complicated welds and difficult-to-access spaces, arc welding is highly desirable for joining AA7075.

Figure 1: A comparison of the weld beads produced in the study. Conventional AA7075 and ER5356 fillers both yielded macroscopic cracks, but a nanotreated AA7075 filler rod produced an even bead with no cracks. Image source: Nature Communications / CC BY 4.0

The UCLA researchers produced arc-weld joints with a tensile strength up to 392 megapascals (MPa). With post-weld heat treatment, as with other nanotechnology-based approaches, the tensile strength increases to 551 MPa, 96% of the wrought material's strength, which is comparable to many steels. The researchers found that the TiC nanoparticles modified AA7075's alpha grain and secondary phase morphologies to produce a strong, crack-free fusion joint.

The UCLA team was not the first to employ a nanoparticle-enhanced filler to solve a difficult-to-weld alloy. In a 2013 article, a University of Wisconsin research team used a filler enhanced with aluminum-oxide nanoparticles to arc-weld A206 aluminum-copper alloy, which has a susceptibility to hot tearing similar to that of AA7075. That study found that the nanoparticle filler drastically improved A206's hot-tearing resistance, much more so than traditional grain-enhancement techniques.
# 2.6 Solve a formula for a specific variable (Page 2/4)

Aurelia is driving from Miami to Orlando at a rate of 65 miles per hour. The distance is 235 miles. To the nearest tenth of an hour, how long will the trip take?

Kareem wants to ride his bike from St. Louis to Champaign, Illinois. The distance is 180 miles. If he rides at a steady rate of 16 miles per hour, how many hours will the trip take? 11.25 hours

Javier is driving to Bangor, 240 miles away. If he needs to be in Bangor in 4 hours, at what rate does he need to drive?

Alejandra is driving to Cincinnati, 450 miles away. If she wants to be there in 6 hours, at what rate does she need to drive? 75 mph

Aisha took the train from Spokane to Seattle. The distance is 280 miles and the trip took 3.5 hours. What was the speed of the train?

Philip got a ride with a friend from Denver to Las Vegas, a distance of 750 miles. If the trip took 10 hours, how fast was the friend driving? 75 mph

Solve a Formula for a Specific Variable

In the following exercises, use the formula $d=rt$.

Solve for $t$ (a) when $d=350$ and $r=70$ (b) in general

Solve for $t$ (a) when $d=240$ and $r=60$ (b) in general — (a) $t=4$ (b) $t=\frac{d}{r}$

Solve for $t$ (a) when $d=510$ and $r=60$ (b) in general

Solve for $t$ (a) when $d=175$ and $r=50$ (b) in general — (a) $t=3.5$ (b) $t=\frac{d}{r}$

Solve for $r$ (a) when $d=204$ and $t=3$ (b) in general

Solve for $r$ (a) when $d=420$ and $t=6$ (b) in general — (a) $r=70$ (b) $r=\frac{d}{t}$

Solve for $r$ (a) when $d=160$ and $t=2.5$ (b) in general

Solve for $r$ (a) when $d=180$ and $t=4.5$ (b) in general — (a) $r=40$ (b) $r=\frac{d}{t}$

In the following exercises, use the formula $A=\frac{1}{2}bh$.

Solve for $b$ (a) when $A=126$ and $h=18$ (b) in general

Solve for $h$ (a) when $A=176$ and $b=22$ (b) in general — (a) $h=16$ (b) $h=\frac{2A}{b}$

Solve for $h$ (a) when $A=375$ and $b=25$ (b) in general

Solve for $b$ (a) when $A=65$ and $h=13$ (b) in general — (a) $b=10$ (b) $b=\frac{2A}{h}$

In the following exercises, use the formula $I=Prt$.

Solve for the principal, $P$, (a) for $I=\$5{,}480$, $r=4\%$, $t=7$ years (b) in general

Solve for the principal, $P$, (a) for $I=\$3{,}950$, $r=6\%$, $t=5$ years (b) in general — (a) $P=\$13{,}166.67$ (b) $P=\frac{I}{rt}$

Solve for the time, $t$, (a) for $I=\$2{,}376$, $P=\$9{,}000$, $r=4.4\%$ (b) in general

Solve for the time, $t$, (a) for $I=\$624$, $P=\$6{,}000$, $r=5.2\%$ (b) in general — (a) $t=2$ years (b) $t=\frac{I}{Pr}$

In the following exercises, solve.

Solve the formula $2x+3y=12$ for $y$ (a) when $x=3$ (b) in general

Solve the formula $5x+2y=10$ for $y$ (a) when $x=4$ (b) in general — (a) $y=-5$ (b) $y=\frac{10-5x}{2}$

Solve the formula $3x-y=7$ for $y$ (a) when $x=-2$ (b) in general

Solve the formula $4x+y=5$ for $y$ (a) when $x=-3$ (b) in general — (a) $y=17$ (b) $y=5-4x$

Solve $a+b=90$ for $b$.

Solve $a+b=90$ for $a$. — $a=90-b$

Solve $180=a+b+c$ for $a$.

Solve $180=a+b+c$ for $c$. — $c=180-a-b$

Solve the formula $8x+y=15$ for $y$.

Solve the formula $9x+y=13$ for $y$.
— $y=13-9x$

Solve the formula $-4x+y=-6$ for $y$.

Solve the formula $-5x+y=-1$ for $y$. — $y=-1+5x$

Solve the formula $4x+3y=7$ for $y$.

Solve the formula $3x+2y=11$ for $y$. — $y=\frac{11-3x}{2}$

Solve the formula $x-y=-4$ for $y$.

Solve the formula $x-y=-3$ for $y$. — $y=3+x$

Solve the formula $P=2L+2W$ for $L$.

Solve the formula $P=2L+2W$ for $W$. — $W=\frac{P-2L}{2}$

Solve the formula $C=\pi d$ for $d$.

Solve the formula $C=\pi d$ for $\pi$. — $\pi =\frac{C}{d}$

Solve the formula $V=LWH$ for $L$.

Solve the formula $V=LWH$ for $H$. — $H=\frac{V}{LW}$

## Everyday math

Converting temperature: While on a tour in Greece, Tatyana saw that the temperature was 40° Celsius. Solve for $F$ in the formula $C=\frac{5}{9}\left(F-32\right)$ to find the Fahrenheit temperature.

Converting temperature: Yon was visiting the United States and he saw that the temperature in Seattle one day was 50° Fahrenheit. Solve for $C$ in the formula $F=\frac{9}{5}C+32$ to find the Celsius temperature. — 10°C

## Writing exercises

Solve the equation $2x+3y=6$ for $y$ (a) when $x=-3$ (b) in general. Which solution is easier for you, (a) or (b)? Why?

Solve the equation $5x-2y=10$ for $x$ (a) when $y=10$ (b) in general. Which solution is easier for you, (a) or (b)? Why?

## Self check

After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. What does this checklist tell you about your mastery of this section? What steps will you take to improve?
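A quick aside (my sketch, not part of the exercise set): these "solve for a variable" manipulations can be checked mechanically with sympy.

```python
from sympy import symbols, solve

P, L, W = symbols('P L W')
# Solve the perimeter formula P = 2L + 2W for W
print(solve(P - (2*L + 2*W), W))   # -> [P/2 - L], i.e., (P - 2L)/2
```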
# Michael Scroggie (scrogster) — Last active Aug 23, 2016

#I write papers and reports using Rmarkdown, knitting to pdf.
#My co-authors use word, so I usually knit a version to docx for them to edit.
#This mostly works flawlessly, except when I use \begin{align} and \end{align} to delimit display equations.
#Using \begin{align} and \end{align} results in nicely centered and numbered equations in pdf, BUT
#the equations don't render at all when I try to make a docx.
#As a quick hack so I can make a passable docx to share with my co-authors, the following script replaces all instances
#of \begin{align} and \end{align} with "". The modified version of the original Rmarkdown file knits to docx just fine,
#albeit without equation numbers.
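The script body itself didn't survive the copy; a minimal sketch of what the comments describe might be (file names are placeholders):

# "paper.Rmd" is a placeholder -- substitute your own Rmarkdown file.
rmd <- readLines("paper.Rmd")
# Strip the align delimiters so the document knits to docx
rmd <- gsub("\\begin{align}", "", rmd, fixed = TRUE)
rmd <- gsub("\\end{align}", "", rmd, fixed = TRUE)
writeLines(rmd, "paper_docx.Rmd")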
# Sketch a scatterplot showing data for which the correlation is r = -1.

yagombyeR

Since $r = -1$, there is a perfect linear relationship between the two variables (i.e. a deterministic relationship) and they have a negative association (one variable tends to decrease as the other increases). For example, the data

x: 1, 2, 3, 4, 5
y: 13, 11, 9, 7, 5

has a correlation of -1, since it follows the line $y = 15 - 2x$ (a strictly decreasing function, so the association is negative). [Scatterplot of these five points omitted here.]

Result: for example, the data set x = 1, 2, 3, 4, 5 with y = 13, 11, 9, 7, 5 has a correlation of -1.
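As a quick numerical check of this example (my addition, using numpy, which the original answer does not use):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = 15 - 2 * x                 # the strictly decreasing line from the answer
r = np.corrcoef(x, y)[0, 1]    # Pearson correlation coefficient

print(y)   # [13 11  9  7  5]
print(r)   # -1.0 (up to floating-point rounding)
```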
# Safari 7.0 Can Preview Adobe Photoshop .psd Files

Here’s a quick tip: Safari 7.0 in OS X Mavericks can preview Adobe Photoshop files ( .psd ). To preview a .psd file in Safari, you can either open it from a link and it will show up just like a .pdf file would ( see below ), or just drag-and-drop the file onto Safari ( see above ). Of course Preview in OS X can also open .psd files, but because Safari can do it now, you won’t need to download the files to preview them. NOTE: Safari can also open .eps files ( no idea if it could do that before ), and apparently it can open Adobe Illustrator files ( .ai ) too, but don’t take my word on that. Try a few files to see if it works or not…
# How do you evaluate 20P2?

Mar 5, 2017

Steps and result are outlined below...

#### Explanation:

The general formula for a permutation is

$${}_{n}P_{r} = \frac{n!}{(n-r)!}$$

In this case, $n = 20$ and $r = 2$, so our equation becomes

$${}_{20}P_{2} = \frac{20!}{18!} = 20 \times 19 = 380$$
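A one-line sanity check (my addition; math.perm requires Python 3.8 or later):

```python
import math

print(math.perm(20, 2))                           # 380
print(math.factorial(20) // math.factorial(18))   # 380, straight from the formula
```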
Chapter 26, Problem 26.1APE

### Accounting, 27th Edition, WARREN + 5 others, ISBN: 9781337272094

Textbook Problem

# Average rate of return

Determine the average rate of return for a project that is estimated to yield total income of $180,000 over five years, has a cost of $400,000, and has a $50,000 residual value.

To determine

Average Rate of Return: the average rate of return is a method that measures the average earnings of a particular business as a percentage of the average investment. It is also known as the accounting rate of return.

$$\text{Average rate of return} = \frac{\text{Estimated average annual income}}{\text{Average investment}} \times 100$$

To determine: the average rate of return for the project.

Explanation

The average rate of return for the project is 16%.

Working notes:

Estimated average annual income = $180,000 / 5 years = $36,000 ... (1)

Average investment = ($400,000 cost + $50,000 residual value) / 2 = $225,000 ... (2)

$$\text{Average rate of return} = \frac{\$36{,}000}{\$225{,}000} \times 100 = 16\%$$

(Refer to equations (1) and (2) of the working notes.)
# President

The president of Slovakia earns €7,844 per month. How many times more does he earn than Jimmy's mother, whose salary is €612? On 15 March 2014 the presidential election decides who will, almost effortlessly, receive such a princely salary for five years. ;)

Correct result: n = 12.8

#### Solution:

$n = 7844/612 \approx 12.8$

## Next similar math problems:

• Railways - golden parachutes: As often happens in Slovakia, the state takes from the poor and gives to financial institutions and "gorillas", punishing the hardworking with taxes. Let's look at the short stint of one director of the railway company ZSSK, Mgr. P. K.: 18 months of 'work' as director...

• University bubble: You'll notice that colleges are slowly popping up in every other high school. In Slovakia and the Czech Republic a lot of people study political science, mass media communication, social work, and many sorts of management MBA. Calculate how many times more a clever 25-year...

• Diophantus: We know little about this Greek mathematician from Alexandria, except that he lived around the 3rd century A.D. Thanks to an admirer of his, who described his life through an algebraic riddle, we know at least something about his life. Diophantus's youth las...

• Free education: In Slovakia a lot of myths circulate, particularly ideas about what must be free. For example, education should be free, and children of poor parents should at least learn to read and write. Calculate how much a student would have to pay per hour of teaching...

• Youth track: The youth track from Hronská Dúbrava to Banská Štiavnica, whose announced cancellation attracted considerable media attention and public opposition, costs 6.3 euros per capita with revenue of 13 cents per capita. Calculate the size of the subsidy for a trip group of 28...

• Rates: For gas consumption, the consumer may choose one of two rates: rate A pays €0.40 per 1 m3 of gas plus a flat monthly fee of €3.90 (regardless of consumption); rate B pays €0.359 per 1 m3 of gas plus a flat monthly fee of €12.50. From what monthly...

• Pension or fraud: Imagine that you work honestly your entire working life and pay taxes and Social Insurance (in Slovakia). You have a gross wage of 730 euros; you and your employer together pay €263 in social insurance monthly for 44 years, and your pension is 354 euros...

• How much and how many times is 72.1 greater than 0.00721?

• ŽSR: Calculate the fixed annual personnel costs of operating a monorail line 118 km long if there is a station every 5 km, each served by three people (one dispatcher and two switchmen) in 4-shift operation. Consider an average employee salary of €885.

• Baskets: A worker knits baskets, which he then sells at the market. He made seven baskets on Monday and one more each day than on the previous day. How many times more baskets did he make by the following Monday evening compared to the first day, when he...

• Three friends: Cuba, Matthew and their friend Adam took a weekend job because they wanted to fund a joint trip to the Alps that they had planned for the spring break. Cuba enjoyed the skiing trip very much, so he was not lazy to get up and went to...

• TV competition: In the competition, 10 contestants answer five questions, one question per round. Anyone who answers correctly receives as many points as the number of competitors who answered incorrectly in that round. One of the contestants said after the contest: We g...

• Parcel service: A parcel from the U.S. to Slovakia costs €5.30.
A parcel from Slovakia to the U.S. costs €49. Calculate how many times more expensive the Slovak Post is than the U.S. parcel service.

• Family: Dad is two years older than Mom. Mom is 5 times older than Katy. Katy is 2 times younger than Jan. Jan is 10 years old. How old is everyone in the family? How old are they all together?

• Three ships: There are three ships moored in the port, which sail together. The first ship returns after two weeks, the second after four weeks, and the third after eight weeks. In how many weeks will the ships meet in the port for the first time? How many times have...

• Ravens: In the tale of the Seven Ravens there were seven brothers, each of whom was born exactly 2.5 years after the previous one. When the eldest of the brothers was 2 times older than the youngest, their mother cursed them all. How old were the seven raven brothers when their mother cu...

• Forestry workers: 56 laborers are employed in the forest planting trees in nurseries. Working 8-hour days, they would finish the job in 37 days. After 16 days, 9 laborers left. How many days do the others need to complete the planting in the nurseries if they work 10 hours a day?
## New trends in microlocal analysis. (English) Zbl 0859.00023

Tokyo: Springer. viii, 242 p. (1997).

The articles of this volume will be reviewed individually.

Indexed articles:

Bony, Jean-Michel, Fourier integral operators and Weyl-Hörmander calculus, 3-21 [Zbl 0885.35154]
Lerner, Nicolas, The Wick calculus of pseudo-differential operators and energy estimates, 23-37 [Zbl 0893.35144]
Morimoto, Mitsuo; Fujita, Keiko, Eigenfunctions of the Laplacian of exponential type, 39-58 [Zbl 0877.35082]
Moritoh, Shinya, Wavelet transforms and operators in various function spaces, 59-68 [Zbl 0878.35124]
Okada, Yasunori; Yamane, Hideshi, Characteristic Cauchy problems in the complex domain, 69-80 [Zbl 0886.35009]
Uchikoshi, Keisuke, Stokes operators for microhyperbolic equations, 81-99 [Zbl 0871.35003]
Aoki, Takashi, Instanton-type formal solutions to the second Painlevé equation with a large parameter, 103-112 [Zbl 0873.34047]
Gérard, Christian, Pseudodifferential and Fourier integral operators in scattering theory, 113-115 [Zbl 0872.35141]
Kawai, Takahiro; Stapp, Henry P., On infrared singularities, 117-123 [Zbl 0887.47047]
Kozono, Hideo; Yamazaki, Masao, The Navier-Stokes equation with distributions as initial data and application to self-similar solutions, 125-141 [Zbl 0874.35087]
Tajima, Shinichi, Bloch function in an external electric field and Berry-Buslaev phase, 143-156 [Zbl 0869.35093]
Andronikof, Emmanuel, An application of symbol calculus, 159-164 [Zbl 1156.58304]
Andronikof, Emmanuel; Tose, Nobuyuki, Elliptic boundary value problems in the space of distributions, 165-169 [Zbl 0877.58053]
Boutet de Monvel, Louis, On the holonomic character of the elementary solution of a partial differential operator, 171-177 [Zbl 0878.35003]
D’Agnolo, Andrea; Schapira, Pierre, Kernel calculus and extension of contact transformation to $$D$$-modules, 179-190 [Zbl 0872.32007]
Honda, Naofumi, Microfunction solutions of holonomic systems with irregular singularities, 191-204 [Zbl 0872.35005]
Oaku, Toshinori, Some algorithmic aspects of the $$D$$-module theory, 205-223 [Zbl 0872.32008]
Takeuchi, Kiyoshi, On higher-codimensional boundary value problems, 225-234 [Zbl 0889.58072]
Uchida, Motoo, Kashiwara’s microlocal analysis of the Bergman kernel for domains with corner, 235-242 [Zbl 0878.35004]

### MSC:

00B25 Proceedings of conferences of miscellaneous specific interest
35-06 Proceedings, conferences, collections, etc. pertaining to partial differential equations
32-06 Proceedings, conferences, collections, etc. pertaining to several complex variables and analytic spaces

### Keywords:

Microlocal analysis
# Tikz: drawing a line to the edge of a circle

If I wanted to draw a line to the edge of a circle, I could simply do the following

\pgfmathsetmacro{\a}{2}
\draw (0, 0) circle (\a cm);
\draw[-latex] (0, 0) -- ({\a * cos(angle)}, {\a * sin(angle)});

where angle is whatever I specified. The problem I am facing is that I want to draw two sets of 3 lines. One set will correspond to the smaller circles and the other to the larger circles. The problem is the circles aren't defined as above. Additionally, I want to specify different angles for each line. The code for the circles in question is

\documentclass[tikz,convert=false]{standalone}
\usetikzlibrary{through,calc,intersections}
\makeatletter
% reconstructed: this key definition was garbled in the original post;
% the name 'extra radius' follows the wording used in the answer below
\tikzset{extra radius/.code={% needs to be used after 'circle through'!
    % this can be avoided by slightly changing the source
    \pgfmathsetlengthmacro\pgf@tempa{\pgfkeysvalueof{/pgf/minimum width}+2*(#1)}%
    \pgfset{/pgf/minimum width/.expanded=\pgf@tempa}%
}}
\tikzset{
  special style/.code={%
    \if#1\tikz@nonactiveexlmark
      \pgfkeysalso{@special style}%
    \else
      \pgfkeysalso{style/.expanded=#1}%
    \fi
  },
  @special style/.style={draw=none,fill=none}
}
\makeatother
\begin{document}
\begin{tikzpicture}[scale = .7,
    every label/.append style = {font = \small},
    dot/.style = {fill, outer sep = +0pt, inner sep = +0pt, shape = circle, draw = none, label = {#1}},
    dot/.default =,
    small dot/.style = {minimum size = 2.5pt, dot = {#1}},
    small dot/.default =,
    big dot/.style = {minimum size = 5pt, dot = {#1}},
    big dot/.default =
  ]
  \begin{scope}[rotate around = {-23.9625:(.75, -1)}]
    \begin{scope}
      \clip (-1, -4) rectangle (.5, 4);
      \draw [samples = 50, domain = -0.99:0.99, xshift = 1cm, red, thick]
        plot ({0.8 * (-1 - (\x)^2) / (1 - (\x)^2)}, {1.83 * (-2) * (\x) / (1 - (\x)^2)});
    \end{scope}
    \node[scale = .75, small dot = {below: $P_1$}] (P1) at (3, 0) {};
    \node[scale = .75, small dot = {above, left = 3.5pt: $P_2$}] (P2) at (-1, 0) {};
    \node[scale = .75, small dot = {below, right = 5pt: $F$}] (F) at (.75, -1) {};
    \path[blue] (F) edge (P1) edge (P2) (P1) edge (P2);
    \path ($(P1)!.7!(P2)$) coordinate (Fm) node[small dot = {below = 10pt, right = 3pt: $F_m^*$}] {};
    % reconstructed: the two \foreach loops were scrambled in the original post
    \foreach \cPoint in {1, 2}
      \foreach \cRadius [count=\cName from 0] in {.0cm, .4cm, .8cm}
        \node[draw, red, name path global/.expanded = \cPoint:\cName]
          at (P\cPoint.center) (\cPoint:\cName)
          [circle through = (Fm), extra radius = \cRadius] {};
    \foreach \cRadius in {1, 2} {
      \tikzset{name intersections = {of/.expanded = {1:\cRadius} and ...}}% second operand lost in the original
      \foreach \cSolution in {1, 2}
        \node[black, scale = .5, big dot = {right, below = 5pt:
            $\ifnum\cSolution = 1\expandafter\tilde F\else F\fi^*_\cRadius$}]
          at (intersection-\cSolution) {};
    }
  \end{scope}
\end{tikzpicture}
\end{document}

So for the bigger circles, I want to draw lines in decreasing line-length order at 70, 35, and 0 degrees from P1, and for the smaller circles, at -225, -180, and -145 degrees, again in increasing line-length order.

-

@dustin The \tikzset with the definition of the special style is not needed anymore. This was only in it to simply hide a circle with the option ! (which should have been done better and should have been better explained, I apologize). The circles are in fact nodes with the names 1:0, 1:1, 1:2 and 2:0, 2:1 and 2:2. You can use {1:0}.<some angle> to access a point on the circle. Example: \draw ({1:2}.70) -- ({1:1}.35) -- ({1:0}.0); –  Qrrbrbirlbel Jun 16 '13 at 21:09

@Qrrbrbirlbel I understand now. If you want to make your comment an answer, I will accept it. –  dustin Jun 16 '13 at 21:18

@dustin Those are the big ones (the 1 comes from the center dot: P1).
By the way, while ({\a * cos(angle)}, {\a * sin(angle)}) is mathematically correct, TikZ offers a much simpler input: (angle:\a), the polar coordinate syntax. This also explains why the node names in my example above need to be enclosed in braces. I’ll post an answer soon. –  Qrrbrbirlbel Jun 16 '13 at 21:18

This is going to be a very long answer (both in length and detailedness). Keep calm and keep reading. :)

## How to access a coordinate on a circle/an ellipse: polar coordinates

While you can access a coordinate on a circle (or an ellipse with different radii) by using

({<x radius> * cos(<angle>)}, {<y radius> * sin(<angle>)})

TikZ offers a much simpler input with polar coordinates. Its implicit form is:

• (<angle>:<radius>) for a coordinate on a circle and
• (<angle>:<x radius> and <y radius>) for a coordinate on an ellipse.

## How to access a coordinate on a circle/an ellipse whose center does not lie at the origin: shifting/calculations

Of course, in this way, we can only access coordinates on circles/ellipses whose center is located at the origin. The shift key comes in handy when you want to access other polar coordinates. The sequences (the first without, the second with the calc library)

\draw[shift=(P1)] (120:1cm) -- (50:.5cm) -- (40:.2cm);
\draw ($(P1)+(120:1cm)$) -- ($(P1)+(50:.5cm)$) -- ($(P1)+(40:.2cm)$);

would connect coordinates that lie on circles around (P1).

## How to access a polar coordinate on a (circular) node: the endless possibilities of nodes

But do we actually know the exact radii? No, we have never specified them at all. The through library makes it possible to draw circles through points without specifying a radius. We could certainly use the calc library and its let … in … path operator to calculate it, but we don’t need to. Besides (compass) anchors like north and south east, every shape also includes (or should include) a definition for its border. To make a long story short (it’s rather different for other shapes that are not circle or ellipse): all coordinates on a circular node are easily accessible:

(<node name>.<angle>)

The created circles in our examples are circle nodes named:

• for circles around (P1)
  • 1:0 (extra radius = .0cm)
  • 1:1 (extra radius = .4cm)
  • 1:2 (extra radius = .8cm)
• for circles around (P2)
  • 2:0 (extra radius = .0cm)
  • 2:1 (extra radius = .4cm)
  • 2:2 (extra radius = .8cm)

(There is something else to consider when connecting nodes in this or any other way: the default value of outer xsep and outer ysep is set to .5\pgflinewidth, which means that the accessed anchors/angles lie on the outside of the border of the path (a line has a width!). This does not apply here since circle through also sets both outer separators to zero, making it more like the typical circle/ellipse path operators.)

### Note to myself: Think ahead! ;)

But why does

\draw (1:2.70) -- (1:1.35) -- (1:0.0);

give such faulty output? Well, when TikZ parses coordinates it checks for various text sequences that imply certain coordinate types. After checking for coordinate systems (cs:), intersections (intersections), coordinates perpendicular and horizontal to other coordinates (|- and -|), it checks first for polar (:), then for Cartesian coordinates (,). If none of these apply, only then is the coordinate interpreted as a node specification. (So the coordinates above are interpreted as polar coordinates with angles 1 and radii of 2.7, 1.35 and 0.0.)
We can solve this by:

• protecting the : from the parser:

\draw ({1:2}.70) -- ({1:1}.35) -- ({1:0}.0);

• using the explicit form of the node coordinates:

\draw (node cs: name=1:2, angle=70) -- (node cs: name=1:1, anchor=35) -- (node cs: name=1:0, anchor=east);

The options angle and anchor are interchangeable.

• not using : in the first place (recommended). Naming the nodes from 1-0 through 2-2 makes it easier to use the implicit form:

\draw (1-2.70) -- (1-1.35) -- (1-0.0);
\draw (2-2.-225) -- (2-1.-180) -- (2-0.-145);

I hope I have understood you correctly regarding what points you want to connect. Or are you looking for the following?

\path (P1) edge (1-2.70) edge (1-1.35) edge (1-0.0)
      (P2) edge (2-2.-225) edge (2-1.-180) edge (2-0.-145);

Note: I have used : in the previous answer to have the same names for the nodes as for their paths. Using - in the path names failed at some stage of answering; : worked somehow. Now, it works again with -. Color me puzzled.

## Code

\documentclass[tikz,convert=false]{standalone}
\usetikzlibrary{through,calc,intersections}
\makeatletter
% reconstructed: this key definition was garbled in the original post
\tikzset{extra radius/.code={% needs to be used after 'circle through'!
    % this can be avoided by slightly changing the source
    \pgfmathsetlengthmacro\pgf@tempa{\pgfkeysvalueof{/pgf/minimum width}+2*(#1)}%
    \pgfset{/pgf/minimum width/.expanded=\pgf@tempa}%
}}
\makeatother
\begin{document}
\begin{tikzpicture}[scale = .7,
    every label/.append style = {font = \small},
    dot/.style = {fill, outer sep = +0pt, inner sep = +0pt, shape = circle, draw = none, label = {#1}},
    dot/.default =,
    small dot/.style = {minimum size = 2.5pt, dot = {#1}},
    small dot/.default =,
    big dot/.style = {minimum size = 5pt, dot = {#1}},
    big dot/.default =
  ]
  \begin{scope}[rotate around = {-23.9625:(.75, -1)}]
    \begin{scope}
      \clip (-1, -4) rectangle (.5, 4);
      \draw [samples = 50, domain = -0.99:0.99, xshift = 1cm, red, thick]
        plot ({0.8 * (-1 - (\x)^2) / (1 - (\x)^2)}, {1.83 * (-2) * (\x) / (1 - (\x)^2)});
    \end{scope}
    \node[scale = .75, small dot = {below: $P_1$}] (P1) at (3, 0) {};
    \node[scale = .75, small dot = {above, left = 3.5pt: $P_2$}] (P2) at (-1, 0) {};
    \node[scale = .75, small dot = {below, right = 5pt: $F$}] (F) at (.75, -1) {};
    \path[blue] (F) edge (P1) edge (P2) (P1) edge (P2);
    \path ($(P1)!.7!(P2)$) coordinate (Fm) node[small dot = {below = 10pt, right = 3pt: $F_m^*$}] {};
    % reconstructed: the two \foreach loops were scrambled in the original post
    \foreach \cPoint in {1, 2}
      \foreach \cRadius [count=\cName from 0] in {.0cm, .4cm, .8cm}
        \node[draw, red, name path global/.expanded = \cPoint-\cName]
          at (P\cPoint.center) (\cPoint-\cName)
          [circle through = (Fm), extra radius = \cRadius] {};
    \foreach \cRadius in {1, 2} {
      \tikzset{name intersections = {of/.expanded = {1-\cRadius} and ...}}% second operand lost in the original
      \foreach \cSolution in {1, 2}
        \node[black, scale = .5, big dot = {right, below = 5pt:
            $\ifnum\cSolution = 1\relax\tilde F\else F\fi^*_\cRadius$}]
          at (intersection-\cSolution) {};
    }
  \end{scope}
  \draw (1-2.70) -- (1-1.35) -- (1-0.0);
  \draw (2-2.-225) -- (2-1.-180) -- (2-0.-145);
  % or
  \path (P1) edge (1-2.70) edge (1-1.35) edge (1-0.0)
        (P2) edge (2-2.-225) edge (2-1.-180) edge (2-0.-145);
\end{tikzpicture}
\end{document}

## Output

[First output image omitted: the three lines per circle at the requested angles.]

### Connecting coordinates on the circle with the center point

[Second output image omitted.]

From your first comment, I knew what I needed to do. You don't have the lines I was looking to create, but I was able to create it from that comment. If you want to see the code or what the intent was, just ask. –  dustin Jun 17 '13 at 1:05

@dustin That’s okay. I have updated my answer with an image of the other idea of how I understood your request. Maybe that’s it. Or not. At least I could help you. :) –  Qrrbrbirlbel Jun 17 '13 at 1:13

That was the intent.
–  dustin Jun 17 '13 at 1:14
The Dimension of the Code of a Strongly Resolvable Design

Authors: Mavron, V. C.; McDonough, Thomas P.
Date issued: 2010
Citation: Mavron, V. C. & McDonough, T. P. 2010, 'The Dimension of the Code of a Strongly Resolvable Design', pp. 203-206.
URI: http://hdl.handle.net/2160/5967
Departments: Institute of Mathematics & Physics (ADT); Algebraic Combinatorics

Abstract: This paper gives an explicit value for the dimension of the code of a strongly resolvable design over the field of prime order $p$ in the case when $p$ is not a divisor of $k-\rho$, where $k$ is the block size of the design and $\rho$ is the number of points in the intersection of two distinct blocks in the same resolution class.
# Changelog

We try to maintain the changelog in the way outlined by the Keep a Changelog project.

## v0.9.0 - 2018-11-11

• New stylo.math module! Currently it contains a lerp function to do linear interpolation between two values a and b
• New stylo.design module! This is the start of the "next level" in stylo's API, abstracting away from the lower level objects such as shapes and colormaps.
• This module adds the notion of a parameter group, which is a collection of values that can be passed into functions as a single object using the dictionary unpacking syntax (**params)

Parameter groups are defined using the define_parameter_group function, taking a name and a comma separated string of parameter names. There is also define_time_dependent_parameter_group that can be used to define a parameter group that depends on time.

Currently there are two pre-defined parameter groups, Position and Trajectory. They both combine the x and y values into a single object, with the second being the time dependent version of the first.

Finally there are two built-in implementations of these parameter groups: StaticPosition and ParametricPosition. The first takes two values and returns them; the second takes two functions in time and calls them at each supplied time value.

## v0.8.0 - 2018-11-07

• New Timeline system! This finally introduces explicit support for animations to stylo.

## v0.7.0 - 2018-10-25

• New Line shape!
• New ImplicitXY shape! Draw any curve that is implicitly defined by a function $$f(x, y)$$

### Changed

• The Circle and Ellipse shapes now take more arguments. By default the shapes will now draw an outline rather than a filled in shape.

## v0.6.1 - 2018-10-20

• New preview keyword argument to images; set this to False if you don't want a matplotlib figure returned.
• New encode keyword argument to images; setting this to True will return a base64 encoded string representation of the image in PNG format.

### Fixed

• Preview images are no longer displayed twice in Jupyter notebooks
• Preview images no longer display the x and y axis numbers.

## v0.6.0 - 2018-10-07

#### Users

• New Triangle shape
• Shapes can now be inverted using the ~ operator.

#### Contributors

• Added new shape InvertedShape which handles the inversion of a shape behind the scenes.
• Tests for all the composite shapes and operators.
• More documentation on how to get involved

### Changed

#### Users

• Shapes now have defined __repr__ methods, including shapes that have been combined, where a representation of a tree will be produced showing how the various shapes have been combined together.
• Preview images in Jupyter notebooks are now larger by default

This release of stylo was brought to you thanks to contributions from the following awesome people!

## v0.5.0 - 2018-09-27

#### Users

• New LayeredImage object that can now draw more than one object
• Added an introductory tutorial for first time users to the documentation
• Functions from the stylo.domain.transform package can now be applied to shapes, meaning that most images can now be made without handling domains directly.

#### Contributors

• Added a Drawable class; this allows a domain, shape and colormap to be treated as a single entity.
• Added a render_drawable function that takes a drawable and some existing image data and applies it to the data.
• Added a get_real_domain function that, given a width, height and scale, returns a RectangularDomain with appropriate aspect ratio, $$(0, 0)$$ at the centre of the image and the scale corresponding to the interval $$[ymin, ymax]$$
• We now make use of the [scripts] section of Pipfile so running common commands is now easier to remember
  • pipenv run test: to run the test suite
  • pipenv run lint: to lint the codebase
  • pipenv run docs: to run a full build of the documentation
  • pipenv run docs_fast: to run a less complete but faster build of the documentation.

### Changed

#### Users

• Altered SimpleImage to no longer take a domain, reducing the cognitive load on first time users. It now instead takes an optional scale variable to control the size of the domain underneath. This also means that the domain now automatically matches the aspect ratio of the image, so no more distortion in non-square images.

#### Contributors

• The tests now take advantage of multi-core machines and should now run much faster
• Building the docs now takes advantage of multi-core machines and should now run much faster.

### Fixed

#### Contributors

• Fixed crashes in exampledoc.py and apidoc.py for first time users
• Fixed issue with sed on a Mac for people running the devenv-setup.sh script

This release of stylo was brought to you thanks to contributions from the following awesome people!

## v0.4.2 - 2018-09-17

• Image objects can now take a size keyword argument to adjust the size of the matplotlib preview plots

## v0.4.1 - 2018-09-17

### Fixed

• Fixed an issue with setup.py that meant most of the code wasn't published to PyPi!

## v0.4.0 - 2018-09-16

Out of the ashes of the previous version rises the biggest release to date! Stylo has been rewritten from the ground up and should now be easier to use, more modular and easier to extend! None (or very little) of the original code remains and not everything has been reimplemented yet, so some of the features listed below may not be available in this version. There is a lot more work to be done, particularly in the tests and docs departments; however, core functionality is now in place and it's been long enough since the previous release.

I'm hoping that from now on releases will be smaller and more frequent as what is now here is refined and tested to create a stable base from which Stylo can be extended.

#### Users

One of the main ideas behind the latest incarnation of stylo is the idea of interfaces borrowed from Java, where you have an object such as Shape, all shapes have certain behaviors in common represented by methods on an interface, and a number of implementations provide the details specific to each shape.

In stylo this is modelled by having a number of abstract classes that define the interfaces that represent different parts of the stylo image creation process. Regular classes then inherit from these to provide the details. With that in mind, this release provides the following "interfaces".

• New RealDomain and RealDomainTransform interfaces; these model the mapping of a continuous mathematical domain $$D \subset \mathbb{R}^2$$ onto a discrete grid of pixels.
• New Shape interface; this models the mapping of the grid of values generated by a domain into a boolean numpy array representing which pixels are a part of the shape.
• New ColorSpace system; this currently doesn't do much but should allow support for the use of different color representations. Currently only 8-bit RGB values are supported.
• New ColorMap interface; this represents the mapping of the boolean numpy array generated by the Shape interface into a numpy array containing the color values that will eventually be interpreted as an image.
• New Image interface. Implementations of this interface will implement common image creation workflows as well as providing a unified way to preview and save images to a file.

With the main interfaces introduced, here is a (very) brief introduction to each of the implementations provided in this release.

RealDomain

• RectangularDomain: models a rectangular subset of the xy-plane, $$[a, b] \times [c, d] \subset \mathbb{R}^2$$
• SquareDomain: similar to the above but in the cases where $$c = a$$ and $$d = b$$
• UnitSquare: similar to the above but the case where $$a = 0$$ and $$b = 1$$

RealDomainTransform

• HorizontalShear: given a domain, applies a horizontal shear to it
• Rotation: given a domain, rotates it by a given angle
• Translation: given a domain, applies a translation to it
• VerticalShear: given a domain, applies a vertical shear to it

Shape

• Square
• Rectangle
• Circle
• Ellipse

ColorSpace

• RGB8: 8-bit RGB valued colors

ColorMap

• FillColor: given a background and a foreground color, colors all False pixels with the background color and all True pixels with the foreground color.

Image

• SimpleImage: currently the only image implementation; this implements one of the simplest workflows that can result in an interesting image. Take a Domain, pass it to a Shape and then apply a ColorMap to the result.

#### Extenders/Contributors

From the beginning this new attempt at stylo has been designed with extensibility in mind, so the library also includes a number of utilities aimed at helping you develop your own tools that integrate well with the rest of stylo.

Domains and DomainTransforms

While stylo currently only ships with the RealDomain and RealDomainTransform interfaces, it is developed in a way that allows the addition of new "families" of domain. If you want to create your own, stylo provides the following functions:

• define_domain: this will write your base domain class (like RealDomain); just give it a name and a list of parameters.
• define_domain_transform: this will write the DomainTransform base class for you.

In addition to defining new families, stylo provides a few helper classes to help you write your own domains and transforms for the existing RealDomain family:

• PolarConversion: if your domain is only "interesting" in cartesian coordinates, this helper class will automatically write the conversion to polar coordinates for you.
• CartesianConversion: if your domain is only "interesting" in polar coordinates, this helper class will automatically write the conversion to cartesian coordinates for you.

stylo.testing

stylo also comes with a testing package that provides a number of utilities to help you ensure that any extensions you write will integrate well with the rest of stylo:

• BaseRealDomainTest: a class that you can base your test case on for any domains in the RealDomain family, to ensure that they function as expected.
• define_domain_test: similar to the define_domain and define_domain_transform functions, this defines a base test class to ensure that domains in your new family work as expected.
• BaseShapeTest: basing your test case on this for any new shapes will ensure that your shapes will function as expected by the rest of stylo
• define_benchmarked_example: this is for those of you wishing to contribute an example to the documentation; using this function with your example code will ensure that your example is automatically included in the documentation when it is next built.

stylo.testing.strategies

This module defines a number of hypothesis strategies for common data types in stylo. Using these (and hypothesis) in your test cases where possible will ensure that your objects will work with the same kind of data as stylo itself.

### Removed

Everything mentioned below.

## v0.3.0 - 2017-12-09

• New Domain class; it is responsible for generating the grids of numbers passed to Drawables when they are mapped onto Images. It replaces most of the old decorators.
• Drawables are now classes! Any drawable is now a class that inherits from Drawable; it brings back much of the old Puppet functionality with some improvements.
• More tests!

### Changed

• ANDing Images (a & b) has been reimplemented so that it hopefully makes more sense. The alpha value of b is used to scale the color values of a.
• Along with the new Domain system, mapping Drawables onto Images has been reworked to hopefully make coordinate calculations faster

### Removed

• stylo/coords.py has been deleted, which means the following functions and decorators no longer exist:
  • mk_domain - Domains are now a class
  • cartesian (now built into the new Domain object)
  • polar (now built into the new Domain object)
  • extend_periocally (now the .repeat() method on the new Domain object)
  • translate (now the .transform() method on the new Domain object)
  • reflect (not yet implemented in the new system)

## v0.2.3 - 2017-11-15

• Image objects can now be added together; this is simply the sum of the color values at each pixel
• Image objects can now be subtracted, which is simply the difference of the colour values at each pixel

### Changed

• Renamed hex_to_rgb to hexcolor. It can now also cope with rgb and rgba arguments, with the ability to promote rgb to rgba colors

## v0.2.2 - 2017-10-30

• Keyword argument 'only' to the 'polar' decorator, which allows you to ignore the x and y variables if you don't need them

### Fixed

• Forgot to expose the objects from interpolate.py to the top level stylo import
• Examples in the documentation, and enabled doctests for them

## v0.2.1 - 2017-10-29

### Fixed

• Stylo should now also work on Python 3.5

### Removed

• Deleted stylo/motion.py as it's something better suited to a plugin
• Deleted Puppet, PuppetMaster and supporting functions as they are broken and better rewritten from scratch

## v0.2.0 - 2017-10-27

• Sampler object which forms the basis of the new Driver implementations
• Channel object which can manage many Sampler-like objects to form a single 'track' of animation data
• A very simple Driver object which allows you to collect multiple Channel objects into a single place

#### Docs

• Added the following reference pages
  • Image
  • Drawable
  • Primitive
  • Sampler
• A How-To section
  • How-To invert the colours of an Image

### Changed

• Image.__and__() now uses a new method which produces better results with colour images

### Fixed

• Numpy shape error in Image.__neg__()

### Removed

• stylo.prims.thicken was redundant so it has been removed

Initial Release
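As a standalone sketch of the two v0.9.0 ideas described at the top of this changelog (lerp and parameter groups), here is my own minimal re-creation for illustration; it is not stylo's actual implementation, and the function names other than lerp are hypothetical:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b: t=0 gives a, t=1 gives b."""
    return (1 - t) * a + t * b

# A parameter group, as the changelog describes it, is just a named bundle
# of values that can be splatted into a function with **params.
def make_position(x, y):
    return {"x": x, "y": y}

def draw_dot(x, y):
    print(f"dot at ({x}, {y})")

params = make_position(lerp(0.0, 1.0, 0.25), 2.0)
draw_dot(**params)   # dot at (0.25, 2.0)
```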
# Tensor product of finite fields

I perfectly understand the tensor product of vector spaces over finite fields. But when I regard these vector spaces as finite fields I get confused.

Let the vector spaces $\mathbb{F}_p^m$ and $\mathbb{F}_p^n$ over the finite field $\mathbb{F}_p$ be given. Then their tensor product $\mathbb{F}_p^m\otimes \mathbb{F}_p^n$ is the vector space $\mathbb{F}_p^{mn}$. The vector spaces $\mathbb{F}_p^m$ and $\mathbb{F}_p^n$ can also be considered as finite fields $\mathbb{F}_{p^m}$ and $\mathbb{F}_{p^n}$. For that, let $r,s\in\mathbb{F}_p[X]$ be irreducible with $\deg(r)=m$ and $\deg(s)=n$. Hence, $\mathbb{F}_{p^m}\cong\mathbb{F}_p[X]/r$ and $\mathbb{F}_{p^n}\cong\mathbb{F}_p[X]/s$.

Is there a canonical way to express the tensor product of these fields in terms of $r$ and $s$? Something like $\mathbb{F}_p[X]/r\otimes \mathbb{F}_p[X]/s\cong\mathbb{F}_p[X]/(r\otimes s)$. How can $r\otimes s$ be defined? Is it the insertion of $r$ into $s$, because $\deg(r(s(x)))=mn$? But is the insertion of irreducible polynomials into each other always irreducible? Does anyone know a piece of literature dealing with this problem? Thx. Chris

A bit more detailed look at what Qiaochu said. Unfortunately my answer won't really be expressed in terms of $r$ and $s$. I hope it still helps you in some way.

We know that $\Bbb{F}_p[X]\otimes \Bbb{F}_{p^n}\cong\Bbb{F}_{p^n}[X]$ and that $\Bbb{F}_{p^n}$ is a flat $\Bbb{F}_p$-module. Let us consider the short exact sequence $$0\to\Bbb{F}_p[X]\to\Bbb{F}_p[X]\to\Bbb{F}_p[X]/\langle r\rangle\to0,$$ where the first map is multiplication by $r$, and the last module is isomorphic to $\Bbb{F}_{p^m}$. Upon tensoring with $\Bbb{F}_{p^n}$ this gives rise to the short exact sequence $$0\to\Bbb{F}_{p^n}[X]\to\Bbb{F}_{p^n}[X]\to\Bbb{F}_{p^n}[X]/\langle r\rangle\to0.$$ Therefore a comparison of the last modules shows that $$\Bbb{F}_{p^m}\otimes \Bbb{F}_{p^n}\cong \Bbb{F}_{p^n}[X]/\langle r\rangle.$$ The polynomial $r$ has no multiple zeros in $\overline{\Bbb{F}_p}$, so over $\Bbb{F}_{p^n}$ it factors into a product of distinct factors $$r=\prod_{i=1}^t r_i$$ for some irreducible polynomials $r_i\in\Bbb{F}_{p^n}[X]$. Because these factors are distinct, the Chinese remainder theorem tells us that $$\Bbb{F}_{p^n}[X]/\langle r\rangle\cong\bigoplus_i \Bbb{F}_{p^n}[X]/\langle r_i\rangle.$$

Note that everything above applies equally well to any finite extension of fields $L/K$. There is no need for the fields $L,K$ to be finite. We only needed the polynomial $r$ to be separable, so that we avoided the possibility of repeated factors.

The next step is, as Qiaochu pointed out, specific to Galois extensions. Namely, we can also deduce that the factors $r_i$ are Galois conjugates of each other. Most notably they all have the same degree. In the case of finite fields we can see this more concretely, because we know that the Galois group consists of powers of the Frobenius automorphism $F:x\mapsto x^p$. The zeros of $r$ are $$\alpha,\alpha^p,\alpha^{p^2},\ldots,\alpha^{p^{m-1}}$$ where $\alpha$ is some (fixed) zero of $r$. For example $\alpha=X+\langle r\rangle$. The roots of one of the factors $r_i$ are then lists like $$\alpha^{p^i},\alpha^{p^{i+n}},\alpha^{p^{i+2n}},\ldots$$ because we get such lists of conjugates by applying powers of $F^n$ to one of them. The original list of $m$ roots consisted of a single orbit of the Galois group $G=\langle F\rangle$. This list is now partitioned into orbits of the subgroup $H=\langle F^n\rangle$.
Basic facts about actions of cyclic groups tell us that the $H$-orbits all have size $m/\gcd(m,n)$, and that there are $\gcd(m,n)$ of them. Therefore we get $$\Bbb{F}_{p^m}\otimes\Bbb{F}_{p^n}\cong\bigoplus_{i\in D}\Bbb{F}_{p^n}(\alpha^{p^i}),$$ where the set $D=\{0,1,\ldots,\gcd(m,n)-1\}$ consists of representatives of those orbits. It is easy to see that all those fields $$\Bbb{F}_{p^n}(\alpha^{p^i})\cong \Bbb{F}_{p^\ell}$$ with $\ell=\operatorname{lcm}(m,n)$.

Summary: $\Bbb{F}_{p^m}\otimes\Bbb{F}_{p^n}$ is isomorphic to a direct sum of $\gcd(m,n)$ copies of $\Bbb{F}_{p^\ell}$ where $\ell=\operatorname{lcm}(m,n)$. In particular, $\Bbb{F}_{p^m}\otimes\Bbb{F}_{p^n}$ is a field if and only if $\gcd(m,n)=1$.

• Thank you very much for this detailed explanation. It will take me some more time and reading to fully understand it. But I really appreciate it. The summary was nice too. – Chris Nov 16 '15 at 12:01
• This leads me to a further question: how can I represent or calculate elementary tensors? Let $m=2,n=3,p=3$. If I consider a polynomial representation of $\mathbb{F}_{3^2}$ and $\mathbb{F}_{3^3}$, how can I express or calculate the elementary tensor, let's say $(x+1)\otimes(x^2+1)$? – Chris Nov 16 '15 at 12:06

The tensor product is $\mathbb{F}_p[X, Y]/(r(X), s(Y))$. More generally, tensor products of commutative rings can be computed by "concatenating" their presentations. This tensor product will usually fail to be a field. For example, $\mathbb{F}_{p^n} \otimes \mathbb{F}_{p^n}$ turns out to be the direct product $\prod_{i=1}^n \mathbb{F}_{p^n}$. This is a special case of a more general fact about Galois extensions.
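Plugging the numbers from Chris's follow-up comment into the summary above ($p=3$, $m=2$, $n=3$) gives a concrete check:

```latex
\gcd(2,3) = 1, \qquad \ell = \operatorname{lcm}(2,3) = 6,
\qquad\text{so}\qquad
\Bbb{F}_{3^2} \otimes_{\Bbb{F}_3} \Bbb{F}_{3^3} \cong \Bbb{F}_{3^6},
```

a single summand, hence a field, consistent with the $\gcd(m,n)=1$ criterion.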
# fexpr.h – flat-packed symbolic expressions

This module supports working with symbolic expressions.

## Introduction

Formally, a symbolic expression is either:

• An atom, being one of the following:
  • An integer, for example 0 or -34.
  • A symbol, for example x, Mul, SomeUserNamedSymbol. Symbols should be valid C identifiers (containing only the characters A-Z, a-z, 0-9, _, and not starting with a digit).
  • A string, for example "Hello, world!". For the moment, we only consider ASCII strings, but there is no obstacle in principle to supporting UTF-8.
• A non-atomic expression, representing a function call $$e_0(e_1, \ldots, e_n)$$ where $$e_0, \ldots, e_n$$ are symbolic expressions.

The meaning of an expression depends on the interpretation of symbols in a given context. For example, with a standard interpretation (used within Calcium) of the symbols Mul, Add and Neg, the expression Mul(3, Add(Neg(x), y)) encodes the formula $$3 \cdot ((-x)+y)$$ where x and y are symbolic variables. See fexpr_builtin.h – builtin symbols for documentation of builtin symbols.

### Computing and embedding data

Symbolic expressions are usually not the best data structure to use directly for heavy-duty computations. Functions acting on symbolic expressions will typically convert to a dedicated data structure (e.g. polynomials) internally and (optionally) convert the final result back to a symbolic expression.

Symbolic expressions do not allow embedding arbitrary binary objects such as Flint/Arb/Antic/Calcium types as atoms. This is done on purpose to make symbolic expressions easy to use as a data exchange format. To embed an object in an expression, one has the following options:

• Represent the object structurally using atoms supported natively by symbolic expressions (for example, an integer polynomial can be represented as a list of coefficients or as an arithmetic expression tree).
• Introduce a dummy symbol to represent the object, maintaining an external translation table mapping this symbol to the intended value.
• Encode the object using a string or symbol name. This is generally not recommended, as it requires parsing; properly used, symbolic expressions have the benefit of being able to represent the parsed structure.

### Flat-packed representation

Symbolic expressions are often implemented using trees of pointers (often together with hash tables for uniqueness), requiring some form of memory management. The fexpr_t type, by contrast, stores a symbolic expression using a "flat-packed" representation without internal pointers. The expression data is just an array of words (ulong). The first word is a header encoding type information (whether the expression is a function call or an atom, and the type of the atom) and the total number of words in the expression. For atoms, the data is stored either in the header word itself (small integers and short symbols/strings) or in the following words. For function calls, the header is followed by the expressions $$e_0$$, …, $$e_n$$ packed contiguously in memory.

Pros:

• Memory management is trivial.
• Copying an expression is just copying an array of words.
• Comparing expressions for equality is just comparing arrays of words.
• Merging expressions is basically just concatenating arrays of words.
• Expression data can be shared freely in binary form between threads and even between machines (as long as all machines have the same word size and endianness).
Cons:

• Repeated instances of the same subexpression cannot share memory (a workaround is to introduce local dummy symbols for repeated subexpressions).
• Extracting a subexpression for modification generally requires making a complete copy of that subexpression (however, for read-only access to subexpressions, one can use "view" expressions which have zero overhead).
• Manipulating a part of an expression generally requires rebuilding the whole expression.
• Building an expression incrementally is typically $$O(n^2)$$. As a workaround, it is a good idea to work with balanced (low-depth) expressions and try to construct an expression in one go (for example, to create a sum, create a single Add expression with many arguments instead of chaining binary Add operations).

## Types and macros

type fexpr_struct
type fexpr_t
  An fexpr_struct consists of a pointer to an array of words along with a record of the number of allocated words. An fexpr_t is defined as an array of length one of type fexpr_struct, permitting an fexpr_t to be passed by reference.

type fexpr_ptr
  Alias for fexpr_struct *, used for arrays of expressions.

type fexpr_srcptr
  Alias for const fexpr_struct *, used for arrays of expressions when passed as constant input to functions.

type fexpr_vec_struct
type fexpr_vec_t
  A type representing a vector of expressions with managed length. The structure contains an fexpr_ptr entries for the entries, an integer length (the size of the vector), and an integer alloc (the number of allocated entries).

fexpr_vec_entry(vec, i)
  Returns a pointer to entry i in the vector vec.

## Memory management

void fexpr_init(fexpr_t expr)
  Initializes expr for use. Its value is set to the atomic integer 0.

void fexpr_clear(fexpr_t expr)
  Clears expr, freeing its allocated memory.

fexpr_ptr _fexpr_vec_init(slong len)
  Returns a heap-allocated vector of len initialized expressions.

void _fexpr_vec_clear(fexpr_ptr vec, slong len)
  Clears the len expressions in vec and frees vec itself.

void fexpr_fit_size(fexpr_t expr, slong size)
  Ensures that expr has room for size words.

void fexpr_set(fexpr_t res, const fexpr_t expr)
  Sets res to a copy of expr.

void fexpr_swap(fexpr_t a, fexpr_t b)
  Swaps a and b efficiently.

## Size information

slong fexpr_depth(const fexpr_t expr)
  Returns the depth of expr as a symbolic expression tree.

slong fexpr_num_leaves(const fexpr_t expr)
  Returns the number of leaves (atoms, counted with repetition) in the expression expr.

slong fexpr_size(const fexpr_t expr)
  Returns the number of words in the internal representation of expr.

slong fexpr_size_bytes(const fexpr_t expr)
  Returns the number of bytes in the internal representation of expr. The count excludes the size of the structure itself. Add sizeof(fexpr_struct) to get the size of the object as a whole.

slong fexpr_allocated_bytes(const fexpr_t expr)
  Returns the number of allocated bytes in the internal representation of expr. The count excludes the size of the structure itself. Add sizeof(fexpr_struct) to get the size of the object as a whole.

## Comparisons

int fexpr_equal(const fexpr_t a, const fexpr_t b)
  Checks if a and b are exactly equal as expressions.

int fexpr_equal_si(const fexpr_t expr, slong c)
int fexpr_equal_ui(const fexpr_t expr, ulong c)
  Checks if expr is an atomic integer exactly equal to c.

ulong fexpr_hash(const fexpr_t expr)
  Returns a hash of the expression expr.

int fexpr_cmp_fast(const fexpr_t a, const fexpr_t b)
  Compares a and b using an ordering based on the internal representation, returning -1, 0 or 1.
  This can be used, for instance, to maintain sorted arrays of expressions for binary search; the sort order has no mathematical significance.

## Atoms

int fexpr_is_integer(const fexpr_t expr)
  Returns whether expr is an atomic integer.

int fexpr_is_symbol(const fexpr_t expr)
  Returns whether expr is an atomic symbol.

int fexpr_is_string(const fexpr_t expr)
  Returns whether expr is an atomic string.

int fexpr_is_atom(const fexpr_t expr)
  Returns whether expr is any atom.

void fexpr_zero(fexpr_t res)
  Sets res to the atomic integer 0.

int fexpr_is_zero(const fexpr_t expr)
  Returns whether expr is the atomic integer 0.

int fexpr_is_neg_integer(const fexpr_t expr)
  Returns whether expr is any negative atomic integer.

void fexpr_set_si(fexpr_t res, slong c)
void fexpr_set_ui(fexpr_t res, ulong c)
void fexpr_set_fmpz(fexpr_t res, const fmpz_t c)
  Sets res to the atomic integer c.

void fexpr_get_fmpz(fmpz_t res, const fexpr_t expr)
  Sets res to the atomic integer in expr. This aborts if expr is not an atomic integer.

void fexpr_set_symbol_builtin(fexpr_t res, slong id)
  Sets res to the builtin symbol with internal index id (see fexpr_builtin.h – builtin symbols).

int fexpr_is_builtin_symbol(const fexpr_t expr, slong id)
  Returns whether expr is the builtin symbol with index id (see fexpr_builtin.h – builtin symbols).

int fexpr_is_any_builtin_symbol(const fexpr_t expr)
  Returns whether expr is any builtin symbol (see fexpr_builtin.h – builtin symbols).

void fexpr_set_symbol_str(fexpr_t res, const char *s)
  Sets res to the symbol given by s.

char *fexpr_get_symbol_str(const fexpr_t expr)
  Returns the symbol in expr as a string. The string must be freed with flint_free(). This aborts if expr is not an atomic symbol.

void fexpr_set_string(fexpr_t res, const char *s)
  Sets res to the atomic string s.

char *fexpr_get_string(const fexpr_t expr)
  Assuming that expr is an atomic string, returns a copy of this string. The string must be freed with flint_free().

## Input and output

void fexpr_write(calcium_stream_t stream, const fexpr_t expr)
  Writes expr to stream.

void fexpr_print(const fexpr_t expr)
  Prints expr to standard output.

char *fexpr_get_str(const fexpr_t expr)
  Returns a string representation of expr. The string must be freed with flint_free(). Warning: string literals appearing in expressions are currently not escaped.

## LaTeX output

void fexpr_write_latex(calcium_stream_t stream, const fexpr_t expr, ulong flags)
  Writes the LaTeX representation of expr to stream.

void fexpr_print_latex(const fexpr_t expr, ulong flags)
  Prints the LaTeX representation of expr to standard output.

char *fexpr_get_str_latex(const fexpr_t expr, ulong flags)
  Returns a string of the LaTeX representation of expr. The string must be freed with flint_free(). Warning: string literals appearing in expressions are currently not escaped.

The flags parameter allows specifying options for LaTeX output. The following flags are supported:

FEXPR_LATEX_SMALL
  Generate more compact formulas, most importantly by printing fractions inline as $$p/q$$ instead of as $$\displaystyle{\frac{p}{q}}$$. This flag is automatically activated within subscripts and superscripts and in certain other parts of formulas.

FEXPR_LATEX_LOGIC
  Use symbols for logical operators such as Not, And, Or, which by default are rendered as words for legibility.

## Function call structure

slong fexpr_nargs(const fexpr_t expr)
  Returns the number of arguments n in the function call $$f(e_1,\ldots,e_n)$$ represented by expr. If expr is an atom, returns -1.
void fexpr_func(fexpr_t res, const fexpr_t expr)
  Assuming that expr represents a function call $$f(e_1,\ldots,e_n)$$, sets res to the function expression f.

void fexpr_view_func(fexpr_t view, const fexpr_t expr)
  As fexpr_func(), but sets view to a shallow view instead of copying the expression. The variable view must not be initialized before use or cleared after use, and expr must not be modified or cleared as long as view is in use.

void fexpr_arg(fexpr_t res, const fexpr_t expr, slong i)
  Assuming that expr represents a function call $$f(e_1,\ldots,e_n)$$, sets res to the argument $$e_{i+1}$$. Note that indexing starts from 0. The index must be in bounds, with $$0 \le i < n$$.

void fexpr_view_arg(fexpr_t view, const fexpr_t expr, slong i)
  As fexpr_arg(), but sets view to a shallow view instead of copying the expression. The variable view must not be initialized before use or cleared after use, and expr must not be modified or cleared as long as view is in use.

void fexpr_view_next(fexpr_t view)
  Assuming that view is a shallow view of a function argument $$e_i$$ in a function call $$f(e_1,\ldots,e_n)$$, sets view to a view of the next argument $$e_{i+1}$$. This function can be called when view refers to the last argument $$e_n$$, provided that view is not used afterwards. This function can also be called when view refers to the function f, in which case it will make view point to $$e_1$$.

int fexpr_is_builtin_call(const fexpr_t expr, slong id)
  Returns whether expr has the form $$f(\ldots)$$ where f is a builtin function defined by id (see fexpr_builtin.h – builtin symbols).

int fexpr_is_any_builtin_call(const fexpr_t expr)
  Returns whether expr has the form $$f(\ldots)$$ where f is any builtin function (see fexpr_builtin.h – builtin symbols).

## Composition

void fexpr_call0(fexpr_t res, const fexpr_t f)
void fexpr_call1(fexpr_t res, const fexpr_t f, const fexpr_t x1)
void fexpr_call2(fexpr_t res, const fexpr_t f, const fexpr_t x1, const fexpr_t x2)
void fexpr_call3(fexpr_t res, const fexpr_t f, const fexpr_t x1, const fexpr_t x2, const fexpr_t x3)
void fexpr_call4(fexpr_t res, const fexpr_t f, const fexpr_t x1, const fexpr_t x2, const fexpr_t x3, const fexpr_t x4)
void fexpr_call_vec(fexpr_t res, const fexpr_t f, fexpr_srcptr args, slong len)
  Creates the function call $$f(x_1,\ldots,x_n)$$. The vec version takes the arguments as an array args and n is given by len. Warning: aliasing between inputs and outputs is not implemented.

void fexpr_call_builtin1(fexpr_t res, slong f, const fexpr_t x1)
void fexpr_call_builtin2(fexpr_t res, slong f, const fexpr_t x1, const fexpr_t x2)
  Creates the function call $$f(x_1,\ldots,x_n)$$, where f defines a builtin symbol.

## Subexpressions and replacement

int fexpr_contains(const fexpr_t expr, const fexpr_t x)
  Returns whether expr contains the expression x as a subexpression (this includes the case where expr and x are equal).

int fexpr_replace(fexpr_t res, const fexpr_t expr, const fexpr_t x, const fexpr_t y)
  Sets res to the expression expr with all occurrences of the subexpression x replaced by the expression y. Returns a boolean value indicating whether any replacements have been performed. Aliasing is allowed between res and expr but not between res and x or y.

int fexpr_replace2(fexpr_t res, const fexpr_t expr, const fexpr_t x1, const fexpr_t y1, const fexpr_t x2, const fexpr_t y2)
  Like fexpr_replace(), but simultaneously replaces x1 by y1 and x2 by y2.
int fexpr_replace_vec(fexpr_t res, const fexpr_t expr, const fexpr_vec_t xs, const fexpr_vec_t ys)
  Sets res to the expression expr with all occurrences of the subexpressions given by entries in xs replaced by the corresponding expressions in ys. It is required that xs and ys have the same length. Returns a boolean value indicating whether any replacements have been performed. Aliasing is allowed between res and expr but not between res and the entries of xs or ys.

## Arithmetic expressions

void fexpr_set_fmpq(fexpr_t res, const fmpq_t x)
  Sets res to the rational number x. This creates an atomic integer if the denominator of x is one, and otherwise creates a division expression.

void fexpr_set_arf(fexpr_t res, const arf_t x)
void fexpr_set_d(fexpr_t res, double x)
  Sets res to an expression for the value of the floating-point number x. NaN is represented as Undefined. For a regular value, this creates an atomic integer or a rational fraction if the exponent is small, and otherwise creates an expression of the form Mul(m, Pow(2, e)).

void fexpr_set_re_im_d(fexpr_t res, double x, double y)
  Sets res to an expression for the complex number with real part x and imaginary part y.

void fexpr_neg(fexpr_t res, const fexpr_t a)
void fexpr_add(fexpr_t res, const fexpr_t a, const fexpr_t b)
void fexpr_sub(fexpr_t res, const fexpr_t a, const fexpr_t b)
void fexpr_mul(fexpr_t res, const fexpr_t a, const fexpr_t b)
void fexpr_div(fexpr_t res, const fexpr_t a, const fexpr_t b)
void fexpr_pow(fexpr_t res, const fexpr_t a, const fexpr_t b)
  Constructs an arithmetic expression with given arguments. No simplifications whatsoever are performed.

int fexpr_is_arithmetic_operation(const fexpr_t expr)
  Returns whether expr is of the form $$f(e_1,\ldots,e_n)$$ where f is one of the arithmetic operators Pos, Neg, Add, Sub, Mul, Div.

void fexpr_arithmetic_nodes(fexpr_vec_t nodes, const fexpr_t expr)
  Sets nodes to a vector of subexpressions of expr such that expr is an arithmetic expression with nodes as leaves. More precisely, expr will be constructed out of nested application of the arithmetic operators Pos, Neg, Add, Sub, Mul, Div with integers and expressions in nodes as leaves. Powers Pow with an atomic integer exponent are also allowed. The nodes are output without repetition but are not automatically sorted in a canonical order.

int fexpr_get_fmpz_mpoly_q(fmpz_mpoly_q_t res, const fexpr_t expr, const fexpr_vec_t vars, const fmpz_mpoly_ctx_t ctx)
  Sets res to the expression expr as a formal rational function of the subexpressions in vars. The vector vars must have the same length as the number of variables specified in ctx. To build vars automatically for a given expression, fexpr_arithmetic_nodes() may be used.

  Returns 1 on success and 0 on failure. Failure can occur for the following reasons:

  • A subexpression is encountered that cannot be interpreted as an arithmetic operation and does not appear (exactly) in vars.
  • Overflow (too many terms or too large an exponent).
  • Division by zero (a zero denominator is encountered).

  It is important to note that this function views expr as a formal rational function with vars as formal indeterminates. It thus does not check for algebraic relations between vars and can implicitly divide by zero if vars are not algebraically independent.
void fexpr_set_fmpz_mpoly(fexpr_t res, const fmpz_mpoly_t poly, const fexpr_vec_t vars, const fmpz_mpoly_ctx_t ctx)
void fexpr_set_fmpz_mpoly_q(fexpr_t res, const fmpz_mpoly_q_t frac, const fexpr_vec_t vars, const fmpz_mpoly_ctx_t ctx)

Sets res to an expression for the multivariate polynomial poly (or rational function frac), using the expressions in vars as the variables. The length of vars must agree with the number of variables in ctx. If NULL is passed for vars, a default choice of symbols is used.

int fexpr_expanded_normal_form(fexpr_t res, const fexpr_t expr, ulong flags)

Sets res to expr converted to expanded normal form, viewed as a formal rational function with its non-arithmetic subexpressions as terminal nodes. This function first computes nodes with fexpr_arithmetic_nodes(), sorts the nodes, evaluates to a rational function with fexpr_get_fmpz_mpoly_q(), and then converts back to an expression with fexpr_set_fmpz_mpoly_q(). Optional flags are reserved for future use.

## Vectors

void fexpr_vec_init(fexpr_vec_t vec, slong len)

Initializes vec to a vector of length len. All entries are set to the atomic integer 0.

void fexpr_vec_clear(fexpr_vec_t vec)

Clears the vector vec.

void fexpr_vec_print(const fexpr_vec_t vec)

Prints vec to standard output.

void fexpr_vec_swap(fexpr_vec_t x, fexpr_vec_t y)

Swaps x and y efficiently.

void fexpr_vec_fit_length(fexpr_vec_t vec, slong len)

Ensures that vec has space for len entries.

void fexpr_vec_set(fexpr_vec_t dest, const fexpr_vec_t src)

Sets dest to a copy of src.

void fexpr_vec_append(fexpr_vec_t vec, const fexpr_t expr)

Appends expr to the end of the vector vec.

slong fexpr_vec_insert_unique(fexpr_vec_t vec, const fexpr_t expr)

Inserts expr without duplication into vec, returning its position. If this expression already exists, vec is unchanged. If this expression does not exist in vec, it is appended.

void fexpr_vec_set_length(fexpr_vec_t vec, slong len)

Sets the length of vec to len, truncating or zero-extending as needed.

void _fexpr_vec_sort_fast(fexpr_ptr vec, slong len)

Sorts the len entries in vec using the comparison function fexpr_cmp_fast().
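A hedged sketch of the vector interface above, collecting distinct symbols with fexpr_vec_insert_unique(); fexpr_init(), fexpr_clear() and fexpr_set_symbol_str() are again assumed from the rest of fexpr.h:

```c
#include "fexpr.h"

int main(void)
{
    fexpr_vec_t vec;
    fexpr_t s;

    fexpr_vec_init(vec, 0);            /* empty vector */
    fexpr_init(s);

    fexpr_set_symbol_str(s, "x");      /* assumed symbol constructor */
    fexpr_vec_insert_unique(vec, s);   /* appended at position 0 */
    fexpr_vec_insert_unique(vec, s);   /* duplicate: vec is unchanged */

    fexpr_set_symbol_str(s, "y");
    fexpr_vec_insert_unique(vec, s);   /* appended at position 1 */

    fexpr_vec_print(vec);              /* two entries: x and y */

    fexpr_clear(s);
    fexpr_vec_clear(vec);
    return 0;
}
```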
Boost : Subject: Re: [boost] New Boost.XInt Library, request preliminary review
From: Scott McMurray (me22.ca+boost_at_[hidden])
Date: 2010-04-02 12:30:02

On 2 April 2010 02:48, Gottlob Frege <gottlobfrege_at_[hidden]> wrote:
>
> 2. 1/inf = 0 *IS* exact. It is inexact while approaching inf, but
> finally exactly 0 'at' infinity.
>

Arguably, since we're dealing in round-toward-0 division, even 1/2 is exactly 0. The question I have then is whether at infinity the remainder somehow manages to disappear. Care to shed some light on that?

\forall x > 1, divrem(1,x) = (0, 1), so as x -> inf, it'd still be (0, 1).

I think the idea of "inexact" zeros came from the idea that 1/0 would give infinity, where you'd then want a "-0" so that 1/-0 can give negative infinity (like in floating point). I think I was advocating that at one point, but have since come to my senses.
# Show $\{I,N,N^2\}$ forms a basis of $V$ iff $N^2 \neq 0$

I came across a problem that I would like to ask you about: Let $N \in \mathrm{Mat}_{n\times n}(K)$, i.e. a square matrix, such that $N^{3}=0$, and let $A=\lambda I +N$, where $\lambda \in K$. Also, $V$ is the vector space $V=\mathrm{span}(I,N,N^{2},N^3,N^4,\ldots)$.

I found the set $B=\{I,N,N^2\}$ to be a generating set of all of $V$, since all powers are spanned by $B$ (indeed $N^k = 0$ for every $k \ge 3$). Now I need to prove that $B$ forms a basis if and only if $N^2 \neq 0$, i.e. linear independence needs to be shown, right?
$$a_1I+a_2N+a_3N^2=0 \implies a_1=a_2=a_3=0$$
I'm a bit stuck right now and don't see how to proceed from here, since I don't know much about $N$. Any hints or tips would be greatly appreciated! Best

- If $a_1 I + a_2 N + a_3 N^2 = 0$, multiply by $N^2$ and use $N^3=0$ to get $a_1 = 0$, then ... -
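A hedged completion of that hint (one way the remaining steps might go, under the assumption $N^2 \neq 0$): multiplying by decreasing powers of $N$ peels off the coefficients one at a time,
$$a_1 I + a_2 N + a_3 N^2 = 0 \;\xrightarrow{\;\cdot N^2\;}\; a_1 N^2 = 0 \;\implies\; a_1 = 0,$$
$$a_2 N + a_3 N^2 = 0 \;\xrightarrow{\;\cdot N\;}\; a_2 N^2 = 0 \;\implies\; a_2 = 0, \qquad \text{and finally } a_3 N^2 = 0 \;\implies\; a_3 = 0.$$
For the converse, if $N^2 = 0$ then $0\cdot I + 0\cdot N + 1\cdot N^2 = 0$ is a non-trivial vanishing combination, so $B$ is linearly dependent and cannot be a basis.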
# How do you find domain and range for y=sqrt(x- 2)?

The radicand must be non-negative, so $x - 2 \ge 0$. Hence the domain is $x \ge 2$, or $D \left(f\right) = \left[2 , + \infty\right)$. Since a square root is non-negative, $y = \sqrt{x - 2} \ge 0 \implies y \ge 0$, so the range is $R \left(f\right) = \left[0 , + \infty\right)$.
# $D_{s1}^{*}(2860)^{+}$ MASS

| VALUE (MeV) | EVTS | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|---|
| $2859 \pm 12 \pm 24$ | | AAIJ 2014AW (1) | LHCB | $B_s^0 \rightarrow \overline{D}{}^0 K^- \pi^+$ |

We do not use the following data for averages, fits, limits, etc.:

| VALUE (MeV) | EVTS | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|---|
| $2866.1 \pm 1.0 \pm 6.3$ | 36k | AAIJ 2012AU (2, 3) | LHCB | $pp \rightarrow (DK)^+ X$ at 7 TeV |
| $2862 \pm 2 \; {}^{+5}_{-2}$ | 3122 | AUBERT 2009AR (4, 3) | BABR | $e^+ e^- \rightarrow D^{(*)} K X$ |
| $2856.6 \pm 1.5 \pm 5.0$ | | AUBERT,BE 2006E (5) | BABR | $e^+ e^- \rightarrow D K X$ |

(1) Separated from the spin-3 component $D_{s3}^{*}(2860)^{-}$ by a fit of the helicity angle of the $\overline{D}{}^0 K^-$ system, with a statistical significance of the spin-3 and spin-1 components in excess of 10 $\sigma$.
(2) From the combined fit of the $D^+ K_S^0$ and $D^0 K^+$ modes in the model including the $D_{s2}^{*}(2573)^{+}$, $D_{s1}^{*}(2700)^{+}$ and spin-0 $D_{sJ}^{*}(2860)^{+}$.
(3) Possible contribution from the $D_{s3}^{*}(2860)$ state.
(4) From simultaneous fits to the two $DK$ mass spectra and to the total $D^{*} K$ mass spectrum.
(5) Superseded by AUBERT 2009AR.

References:

AAIJ 2014AW, PRL 113 162001: Observation of Overlapping Spin-1 and Spin-3 $\overline{D}{}^0 K^-$ Resonances at Mass 2.86 GeV/$c^2$.
AAIJ 2012AU, JHEP 1210 151: Study of $D_{sJ}$ Decays to $D^+ K_S^0$ and $D^0 K^+$ Final States in $pp$ Collisions.
AUBERT 2009AR, PR D80 092003: Study of $D_{sJ}$ Decays to $D^{*} K$ in Inclusive $e^+ e^-$ Interactions.
AUBERT,BE 2006E, PRL 97 222001: Observation of a New $D_s$ Meson Decaying to $DK$ at a Mass of 2.86 GeV/$c^2$.
My first post was about how to slice-sample from the variance component of a linear regression when the half-Cauchy prior is assigned. Enough for the review of what the horseshoe distribution is; let's move on to what I found interesting while I was reading this paper. The variance component of the linear model could be sampled from its full conditional because the distribution for slice sampling was not degenerate. But if we want to assign a horseshoe prior to each regression coefficient, which essentially shrinks the estimates toward the origin, we can't use slice sampling.

(A short review of what a horseshoe distribution is. The horseshoe distribution does not have a PDF that we can write down, but it can be written as a scale mixture of normals,
$\begin{array}{rcl}X\,|\,\sigma &\sim& \mathcal{N}\left(0,\sigma^{2}\right)\\ \sigma&\sim& \mathrm{C}^{+}\left(0,1\right) \end{array}$
where $\mathrm{C}^{+}$ denotes the half-Cauchy distribution, i.e. a Cauchy distribution truncated to the positive half-line. Then the marginal of $X$ becomes the horseshoe, which in fact cannot be represented in terms of a closed-form PDF, unfortunately.)

So what they've done is, they found a hierarchical representation of the half-Cauchy distribution.
$\begin{array}{rcl}X^{2}\,|\,a&\sim& \mathcal{IG}\left(\dfrac{1}{2},\dfrac{1}{a}\right)\\ a &\sim& \mathcal{IG}\left(\dfrac{1}{2},\dfrac{1}{A^{2}}\right) \end{array}$
Then the marginal is $X\sim \mathrm{C}^{+}\left(0,A\right)$. Let's prove this. It's just integration.
$\begin{array}{rcl} p_{X^{2}}\left(x^{2}\right) &=& \displaystyle\int_{0}^{\infty} \dfrac{1}{\Gamma(1/2) \sqrt{a}} \left(x^{2}\right)^{-3/2}\exp\left(-\dfrac{1}{ax^{2}}\right) \cdot \dfrac{1}{\Gamma(1/2)A}a^{-3/2}\exp\left(-\dfrac{1}{aA^{2}}\right)\,da\\&=& \dfrac{1}{x^{3}A\pi}\displaystyle\int_{0}^{\infty} a^{-2}\exp\left(-\dfrac{1}{a}\left(\dfrac{1}{x^{2}}+\dfrac{1}{A^{2}}\right)\right)\,da\\ &=& \dfrac{1}{\pi Ax^{3}}\left(\dfrac{1}{x^{2}}+\dfrac{1}{A^{2}}\right)^{-1}\\ &=& \dfrac{1}{\pi A}\left(x+\dfrac{x^{3}}{A^{2}}\right)^{-1} \end{array}$
But this is the density of $X^{2}$, not $X$, so we use a change of variables, $U = X$. Then $U=\sqrt{X^{2}}$ and $U^{2} = X^{2}\implies 2u\,du=d\left(x^{2}\right)$.
$\begin{array}{rcl} p_{X}(u) &=& \dfrac{1}{\pi A}\left(u+\dfrac{u^{3}}{A^{2}}\right)^{-1}\cdot 2u\\ &=& \dfrac{2}{\pi A}\left(1+\dfrac{u^{2}}{A^{2}}\right)^{-1} \end{array}$
That is the density function of a half-Cauchy$(0,A)$ distribution. Using such a hierarchical representation simplifies the derivation of the Gibbs sampler and thus facilitates inference.

[1] Wand, M. P., Ormerod, J. T., Padoan, S. A., & Frühwirth, R. (2011). Mean field variational Bayes for elaborate distributions. Bayesian Analysis, 6(4), 847-900.
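As a hedged aside (my own sketch, not taken from the paper's text): the payoff of the representation is that every full conditional becomes inverse-gamma. Writing $s = \sigma^{2}$ for the scale of a single coefficient $x$, with $x\,|\,s \sim \mathcal{N}(0,s)$, $s\,|\,a \sim \mathcal{IG}(1/2, 1/a)$ and $a \sim \mathcal{IG}(1/2, 1/A^{2})$, multiplying the densities and collecting terms gives
$$s \,|\, a, x \;\sim\; \mathcal{IG}\left(1,\; \dfrac{x^{2}}{2}+\dfrac{1}{a}\right), \qquad a \,|\, s \;\sim\; \mathcal{IG}\left(1,\; \dfrac{1}{s}+\dfrac{1}{A^{2}}\right),$$
so a Gibbs sampler simply alternates two inverse-gamma draws instead of having to sample from a half-Cauchy conditional directly.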
# Easy integral?

#### Drexel28
MHF Hall of Honor
Compute $$\displaystyle \int_0^{2\pi}e^{\cos(x)}\cos\left(x+\sin(x)\right)dx$$

#### simplependulum
MHF Hall of Honor
Is the integrand the derivative of $$\displaystyle e^{\cos(x)} \sin( \sin(x) )$$ ?

#### tonio
$$\displaystyle (e^{\cos(x)} \sin( \sin(x) ))'=-\sin x \,e^{\cos x}\sin(\sin x)+e^{\cos x}\cos(\sin x) \cos x$$ $$\displaystyle =e^{\cos x}\left(\cos(\sin x)\cos x-\sin x\,\sin(\sin x)\right)=$$... $$\displaystyle e^{\cos x}\cos(x+\sin x)$$ !! Oh, dear hollie mollie: yes, it is! A question: how did you come up with it? Some program or some insight?
Tonio

#### simplependulum
MHF Hall of Honor
I write $$\displaystyle e^{\cos(x)} \cos( x + \sin(x) )$$ $$\displaystyle = Re[ e^{\cos(x)}e^{ix} e^{i\sin(x)} ]$$ $$\displaystyle = Re[ e^{e^{ix}} e^{ix} ]$$ then substitute $$\displaystyle u = e^{ix}$$. At first I did it by integration by parts, but the steps are quite long, so I was not sure at that time ... Now I am sure it is true with your help. (Happy)
Bruno J.

#### Drexel28
MHF Hall of Honor
Just tinkering.

There is another method that involves complex analysis, and it is the way I was hoping someone would do this. We know that $$\displaystyle \oint_{|z|=1}e^zdz=0$$ and thus $$\displaystyle \int_0^{2\pi}e^{e^{it}}\left(e^{it}\right)'dt=\int_0^{2\pi}e^{\cos(t)}\left(\cos(\sin(t))+i\sin(\sin(t))\right)\left(-\sin(t)+i\cos(t)\right)dt=0$$; in particular, the real and imaginary parts of that integral must each equal zero. Do a little work with trig identities and you will see that our integral is the imaginary part of it.

#### Random Variable
let $$\displaystyle I(a) = \int^{2 \pi}_{0} e^{a \cos x} \cos (x + a \sin x) \ dx = Re \Big(\int^{2 \pi}_{0} e^{ae^{ix}}e^{ix} \ dx \Big)$$

then $$\displaystyle I'(a) = Re \Big(\int^{2 \pi}_{0} e^{2ix}e^{ae^{ix}} \ dx\Big)$$

let $$\displaystyle u = ae^{ix}$$

EDIT: then $$\displaystyle I'(a) = Re \Big( -\frac{i}{a^{2}} \int^{a}_{a} ue^{u} \ du \Big) = Re(0) = 0$$

so $$\displaystyle I(a) = C$$, but $$\displaystyle I(0) = 0$$, which means $$\displaystyle C=0$$, so $$\displaystyle I(a) = 0$$

and $$\displaystyle \int^{2 \pi}_{0} e^{\cos x} \cos (x + \sin x) \ dx = I(1) = 0$$

Last edited:
LaTeXTools: .sublime-project TEXroot does identify a TeX root, but building from a subfile fails

I am attempting to build a LaTeX project with included files using LaTeXTools in Sublime Text 3. I cannot build a PDF from a subfile. Please tell me how to set up a Sublime project to enable PDF building by using ctrl+b while the editor is open on a child TeX file. Below is a minimal example:

qsar_catalogue.tex:

\documentclass{article}
\title{Exploratory review of QSARs in Toxicology}
\begin{document}
\include{sections/3_methods}
\end{document}

sections/3_methods.tex:

\section{Methods}
test

This Stack Exchange question mentions that a root TeX file can be specified by putting the below in your .sublime-project file:

{
    "settings": {
        "TEXroot": "./yourfilename.tex"
    }
}

This should enable building from TeX files that are included via an \include command. I believe I have done this correctly. If I run "check system" I get the below key/field:

TeX Root
--------
/home/my_user/path/to/qsar_catalogue.tex

However, when I try to build from /home/my_user/path/to/sections/3_methods.tex I get:

entering extended mode
(/home/my_user/path/to/sections/methods.tex
LaTeX2e <2016/02/01>
Babel <3.9q> and hyphenation patterns for 81 language(s) loaded.
! Undefined control sequence.
l.1 \section
{Methods}
?

I found more instructions at the LaTeXTools documentation, but after a few hours of changing various settings I had no luck.

• That does not look like the output from the LaTeXTools build system. Can you try to press ctrl+shift+b and select LaTeX? In addition, you can try to check your builder settings in the file Preferences > Package Settings > LaTeXTools > Settings - User. Jul 11, 2017 at 21:28
13-18 May 2018
Casino Conference Centre
Europe/Prague timezone

## Detection of 2017 ruthenium-106 fallout in grass in Northern Czechia

15 May 2018, 17:15
1h 30m
Gallery (Casino Conference Centre)

### Gallery

#### Casino Conference Centre

Reitenbergerova 4/95, Mariánské Lázně, Czech Republic

### Speaker

Dr Daniela Pittauer (Institute of Environmental Physics, University of Bremen, Germany)

### Description

Traces of radioactive isotopes of ruthenium in the atmosphere were reported in the beginning of October 2017 by several European stations monitoring the airborne concentrations of gamma emitters (e.g., IRSN, 2017). As of February 2018, the source in Eastern Europe has not been publicly identified. $^{106}$Ru is a fission product with a half-life of 371.5 days. It is used as a medical isotope, and despite its relatively short half-life, its use in radionuclide thermoelectric generators has also been suggested (IRSN, 2017). $^{106}$Ru values up to dozens of mBq·m$^{-3}$ have been detected at Czech stations (SURO, 2017), where the isotope $^{103}$Ru was also detected at levels 3-4 orders of magnitude lower.

In October 2017 we collected grass samples from four stations in the Liberec region in northern Czechia. The samples were taken in order to test whether radioisotopes deposited by wet deposition during the period of positive $^{106}$Ru atmospheric detection could be measured in detectable amounts, and whether a deposition ratio could be derived. Grass is an environmental medium in which radionuclide fallout would be concentrated after emission followed by wet deposition. Grass is at the same time an important part of the terrestrial food chain. For both reasons, grass is one of the first environmental media to be investigated in emergency plans.

Our grass samples contained $^{106}$Ru detectable by standard gamma-spectrometric procedures in the range of hundreds of mBq·m$^{-2}$. The observed activities of ruthenium and natural radionuclides were evaluated using a simple deposition model in the context of the reported airborne activities and meteorological conditions.

Acknowledgements

We thank citizen scientists Rudolf Makeľ and Marie Matoušková for collecting the grass samples, and Miroslav Kudrna and the Technical College for Economy and Forestry in Frýdlant for the data from their meteorological station.

References

IRSN, 2017. Detection of ruthenium 106 in the air in the east and southeast of Europe - Update of October 9, 2017, accessed 07.11.2017, http://www.irsn.fr/EN/newsroom/News/Pages/20171009_Detection-of-ruthenium-106-in-the-air-in-the-east-and-southeast-of-Europe.aspx#1

SURO, 2017. Czech National Radiation Protection Institute, Information on ruthenium occurrence in the atmosphere (in Czech: Informace o výskytu ruthenia v ovzduší), accessed 07.11.2017, https://www.suro.cz/cz/publikace/aktuality/informace-o-vyskytu-ru-106-v-ovzdusi

### Primary authors

Dr Daniela Pittauer (Institute of Environmental Physics, University of Bremen, Germany)
Mrs Maria Evangelia Souti (Institute of Environmental Physics, University of Bremen, Germany)
Dr Helmut W. Fischer (University of Bremen)
## My group, Your group, or Our group

Posted in PhD life, politics, Tips

In science the dilemma of either cooperating or competing is everywhere. The situation is never black or white and depends on the discipline. In this post I will limit myself to the typical small-science group model: one group leader, one or two postdocs and a number of PhD students, typically between 4 and 6.

Pressure

All the group members are under pressure. PhD students have to finish their theses in time, preferably with a couple of first-author articles in glossy magazines. At the level of PhD students there is already potential competition if their work overlaps, either with respect to subject or when equipment is shared. The postdoc's first aim is to get a tenure-track position at some academic institution. He needs papers. The PhD students might not want him on their papers, or the other way around. And then the group leader. He is competing all the way. With other group leaders. Both locally and internationally. He wants to be promoted to full professorship. His ambition is an endowed chair. Or an invitation to become a full professor at a renowned institute. Or he just wants a higher salary.

The subject of this post is how group leaders present the group they lead to the outside world in general, and in particular to their competitors. Let us call the scientist Mary Johnson, leading a group called Nano Biodevices based at the University of California Santa Barbara. How should she refer to this group:

• My group
• Our group
• The Johnson group

US style

When I listen to US scientists, it is quite clear. They will talk about "my group", "my lab", "my postdoc" and "my PhD student". When non-scientists hear this possessive scientist talking, they might think that slavery is not yet abolished in science. Anyway, Mary will talk about "my group". It is clear that if Mary were to refer to the group as "The Johnson group" she should go into therapy.

Some of the "my group" group leaders do not realize that they use this terminology. However others, especially those at famous institutes where postdoc candidates and candidates for PhD positions are lining up, consider their group members a disposable workforce. The scientific results of the whole group should be attributed solely to the leader of "my group".

In my opinion, the act of group leaders referring to their group as "my group" is an insult to all group members. As far as I know I have never done it. I refer to the group of which I am the group leader as "our group". Sounds so much better, and so much closer to the truth.

Other people referring to the group

During conference presentations, speakers might want to refer to results obtained by the group Nano Biodevices. The problem of course is that these hyped-up names are not useful in a scientific discussion. Referring to the group as "Mary Johnson's group" is a practice I do not like. I think "the group of Mary Johnson" is slightly better. Even better is "the UCSB group".

Junior scientists

If you have a choice between various groups to join, check if you can find out how their leaders refer to the group they are leading. This observation might help you in making the choice.

1. 14 Sep 2012 10:22, Gijs van Soest
Ad, I share your European sensitivity for possessive language.
However, I have worked with American junior scientists, even graduate students, who would refer to their home base as "my lab" or "my group". So this appears to be pretty common parlance, not necessarily implying hierarchy.

2. 16 Sep 2012 11:06, Mirjam
What bugs me more than the usage of 'my group' are the people who forget to mention that other people were involved at all (group members or collaborators), presenting everything as their own ideas and work. I suspect 'my group' is meant in a different way than it may sound to people from a different country/culture/language… And we are talking about subtleties here: I have no problem talking about a 'PhD student in my group', but will rarely say 'my PhD student', because you indeed don't own people, but you do build up a group (i.e. your own 'business') with your own ideas, money you bring in and people you select. The use of e.g. 'Lagendijk Lab' probably is also a matter of convenience, because you immediately know which group is meant (I will frequently know the author names, but not the place they work at), and science in the end is very much centered around the egos of the scientists. To change the usage of group indicators one needs to change the way the scientific world works right now…

3. 19 Sep 2012 19:59, Philip Chimento
I'm not sure there's really a functional difference between "Mary Johnson's group" and "the group of Mary Johnson." To my native English-speaking ears, the former does not sound more possessive than the latter. The latter does sound, however, unlikely to be used by native speakers. Is it just me, or what do other native speakers think?

4. 24 Sep 2012 14:52, Mirjam
The problem is not in the choice of wording but in the attitude of the speaker…

5. 3 Feb 2013 16:05, Shan
"Our" group is the only way to go. This is of particular importance when interviewing for a new position. Leaders who say "my this" and "my that" will behave in the same way at their new university, owning everything they see. As for presentations, it is always good to have a small picture of the main drivers of a project in the top right-hand corner of the slide. Saying their name is nice, but pictures are far better.

6. 6 Jul 2013 8:17, praha
A little more than 'my group' happens when there are many more researchers (postdocs, senior postdocs, PhD students, undergrad project students) under a leader (PROFESSOR); then also comes the internal competition of 'A's group' versus 'B's group' within the group… Imagine the competition happening there!!
# IBPS PO 2017 Reasoning Ability practice set 6

IBPS EXAM Guru

Directions (1-2): Study the following information to answer the given questions:
In a certain code, 'always to be right' is written as '4 9 3 2', 'right is also just' is written as '9 7 6 5', 'come to terms' is written as '1 3 8', 'terms are just' is written as '0 1 6' and 'always is' is written as '7 4'.

1. Which of the following is the code for 'right'?
A. 9
B. 7
C. 6
D. 5

2. What does '6' represent in this code?
A. terms
B. also
C. are
D. is

Directions (3-7): In each question below are given some statements followed by two conclusions numbered I and II. You have to take the given statements to be true even if they seem to be at variance with commonly known facts, and then decide which of the given conclusions logically follows from the given statements, disregarding commonly known facts. Give answer:

3. Statements: Some tigers are panthers. All lions are tigers.
Conclusions:
I. Some panthers, which are not tigers, are also lions.
II. All panthers, which are not tigers, are also not lions.
A. if only conclusion I follows
B. if only conclusion II follows
C. if either conclusion I or II follows
D. if neither conclusion I nor II follows

4. Statements: Some motors are scooters. No scooter is a vehicle.
Conclusions:
I. There is a possibility that a motor will never be a vehicle.
II. A few vehicles may not be scooters.
A. if only conclusion I follows
B. if only conclusion II follows
C. if either conclusion I or II follows
D. if neither conclusion I nor II follows

5. Statements: All stars are planets. No planet is a moon.
Conclusions:
I. There is a possibility of stars being moons.
II. At least some moons are not planets.
A. if only conclusion I follows
B. if only conclusion II follows
C. if either conclusion I or II follows
D. if neither conclusion I nor II follows

6. Statements: All roses are flowers. Some roses are red.
Conclusions:
I. All flowers being red is a possibility.
II. Many flowers are not red.
A. if only conclusion I follows
B. if only conclusion II follows
C. if either conclusion I or II follows
D. if neither conclusion I nor II follows

7.
A. if only conclusion I follows
B. if only conclusion II follows
C. if either conclusion I or II follows
D. if neither conclusion I nor II follows

Directions (8-10): Study the following information to answer the given questions.
Twelve people are sitting in two parallel rows containing six people each, in such a way that there is an equal distance between adjacent persons. In row 2, P, Q, R, S, T and V are seated and all of them are facing north. In row 1, A, B, C, D, E and F are seated and all of them are facing south. Therefore, in the given seating arrangement, each member seated in a row faces another member of the other row. A sits third to the right of D. Neither A nor D sits at an extreme end. T faces D. V does not face A, and V does not sit at any of the extreme ends. V is not an immediate neighbour of T. B sits at one of the extreme ends. Only two people sit between B and E. E does not face V. Two persons sit between R and Q. R is not an immediate neighbour of T. C does not face V. P is not an immediate neighbour of R.

8. Who amongst the following sit at the extreme ends of the rows?
A. B, E
B. S, T
C. P, R
D. B, F

9. Who amongst the following faces A?
A. R
B. T
C. P
D. S

10. How many persons are seated between T and S?
A. One
B. Two
C. Three
D. None of these
1. Answer: Option A
Explanation: The code for 'right' is '9'.

2. Answer: Option D
Explanation: The number '6' represents 'just'.

3. Answer: Option D
Explanation: All lions are tigers + Some tigers are panthers: A + I = no conclusion. Hence neither conclusion I nor II follows.

4. Answer: Option A
Explanation: Some motors are scooters + No scooter is a vehicle: I + E = O = Some motors are not vehicles. Hence there is a possibility that a motor will never be a vehicle; conclusion I follows. No scooter is a vehicle; by conversion, no vehicle is a scooter, so conclusion II does not follow.

5. Answer: Option B
Explanation: All stars are planets + No planet is a moon: A + E = E = No star is a moon. Conclusion I does not follow. No planet is a moon; by conversion, some moons are not planets. Conclusion II follows.

6. Answer: Option A
Explanation: Some roses are red; by conversion, some reds are roses + All roses are flowers: I + A = I = Some reds are flowers, so all flowers being red is a possibility; conclusion I follows. But from some reds being flowers, by conversion, some flowers not being red is only a possibility. Hence conclusion II does not follow.

7. Answer: Option B
Explanation: All institutes are academies + All academies are schools: A + A = A = All institutes are schools. Some institutes are banks; by conversion, some banks are institutes + All institutes are academies: I + A = I = Some banks are academies. It means there is a possibility: all academies being banks is a possibility. Conclusion II follows. But from All institutes are academies + All academies are schools = All institutes are schools, conclusion I does not follow.

8. Answer: Option C
Explanation: [seating-arrangement diagram in the original]

9. Answer: Option D
Explanation: S faces A.

10. Answer: Option B
Explanation: Two.
## California Governor's Office releases 2013 ZEV action plan; 1.5M ZEVs on CA roadways by 2025

##### 07 February 2013

California Governor Jerry Brown's Office and state agencies issued a 2013 Zero-emission Vehicle (ZEV) Action Plan. The Action Plan follows on Governor Brown's Executive Order (B-16-2012) released March 2012, which set required milestones for state government to enable 1.5 million zero-emission vehicles on California roadways by 2025. (Earlier post.) The Action Plan details concrete actions that state agencies are taking to help accelerate the market for plug-in electric vehicles and fuel cell electric vehicles.

For the purposes of the executive order and action plan, ZEVs include hydrogen fuel cell electric vehicles (FCEVs), battery electric vehicles (BEVs), and plug-in hybrid electric vehicles (PHEVs). They also address light-duty passenger vehicles and heavier vehicles such as freight trucks and public buses.

The action plan, which will be adjusted over time to address changing market conditions, is the product of an interagency working group led by the Governor's Office that includes several state agencies and associated entities, and builds upon significant work already undertaken by these agencies. The action plan also benefits from input from outside stakeholders, including the California Plug-in Electric Vehicle Collaborative (PEVC) and the California Fuel Cell Partnership (CaFCP). The Governor's Executive Order specifically directs collaboration with these two organizations.

The Executive Order established several milestones organized into three time periods: by 2015, by 2020, and by 2025. The Executive Order also directs state government to begin purchasing ZEVs. In 2015, 10% of state departments' light-duty fleet purchases must be ZEVs, climbing to 25% of light-duty purchases by 2020.

The Action Plan outlines actions grouped under four broad goals that state government is currently taking or plans to take to help expand the ZEV market:

1. Complete needed infrastructure and planning
2. Expand consumer awareness and demand
3. Transform fleets
4. Grow jobs and investment in the private sector

Complete needed infrastructure and planning. This action plan is intended to help provide sufficient infrastructure to support up to 1 million ZEVs by 2020. Further actions beyond 2020 will likely be necessary to reach the Executive Order's target of 1.5 million vehicles by 2025, the plan notes. Due to the changing nature of the ZEV market, the action plan does not address infrastructure and planning-related actions after 2020. The 45 detailed actions under this goal are grouped into 13 areas:

1. Provide crucial early funding for ZEV charging and fueling infrastructure.
2. Support ZEV infrastructure planning and investment by public and private entities.
4. Ensure pricing transparency for ZEV charging and fueling.
5. Expand appropriate ZEV-related signage on highway corridors and surface streets.
6. Support local government efforts to prepare communities for increased PEV usage and the coming commercialization of FCEVs.
7. Ensure that hydrogen and electricity can legally be sold as a retail transportation fuel.
8. Make it easier to locate and install public PEV infrastructure.
9. Ensure a minimum network of hydrogen stations for the commercial launch of fuel cell vehicles between 2015 and 2017.
10. Streamline permitting of hydrogen stations.
11. Plan for and integrate peak vehicle demand for electricity into the state's energy grid.
12. Establish consistent statewide codes and standards for ZEV infrastructure.
13. Coordinate with other "Section 177 states" that have adopted California's ZEV mandate to learn from each other's innovations and enable a seamless consumer experience for ZEV drivers across the country.

Expand consumer awareness and demand. The action plan includes several strategies to help expand consumer awareness and interest in ZEVs, including reducing upfront purchase and operating costs, promoting consumer awareness and strengthening the connection between ZEVs and renewable energy. The 30 actions are grouped into seven areas:

1. Reduce up-front purchase costs for ZEVs.
2. Encourage and support auto dealers to increase sales and leases of ZEVs.
3. Reduce operating costs for ZEVs.
4. Develop and maintain attractive non-monetary incentives for use of ZEVs.
5. Strengthen connections between research institutions and auto makers to better understand how ZEVs are being used.
6. Promote consumer awareness of ZEVs through public education, outreach and direct driving experiences.
7. Provide plug-in vehicle (PEV) drivers with options to connect PEV charging with energy efficiency and renewable energy.

Transform fleets. The Governor's Executive Order aims to expand ZEVs in both public and private vehicle fleets. The order specifically directs DGS and state departments to increase the share of ZEVs in their own fleets through the normal course of fleet replacement. The action plan also calls for expanded ZEV deployment within private vehicle fleets, including public transportation and freight transport. The plan identifies a range of actions that state government should take to encourage increased ZEV deployment in private fleets, including providing funding support, keeping fueling affordable, and increasing coordination and communication among fleet users. For both state and private fleets, the plan outlines 30 actions grouped into 10 areas:

1. Incorporate ZEVs into the state vehicle fleet.
2. Identify funding to expand fleet purchases of ZEVs and ZEV infrastructure.
3. Track benefits of fleets' transition to ZEVs to the extent practicable.
4. Complete necessary infrastructure to allow for 10% ZEV purchases by 2015.
5. Maximize use of ZEVs in state-sponsored car rentals.
6. Ensure that state vehicles can benefit from evolving benefits associated with ZEVs and position state vehicle fleets to participate in technology demonstrations.
7. Expand use of ZEVs for private light- and medium-duty fleets.
8. Help to expand ZEVs within bus fleets.
9. Reduce cost barriers to ZEV adoption for freight vehicles.
10. Integrate ZEVs into freight planning.

Grow jobs and investment in the private sector. While state government continues to provide publicly funded financial incentives to expand the consumer market for ZEVs, the state's actions are intended ultimately to build a ZEV market that is sustainable without public subsidies, through growing consumer demand and private investment. The plan outlines 15 actions grouped into four areas:

1. Leverage tools to support business attraction, retention and expansion of ZEV companies.
2. Support demonstration and commercialization of ZEV-related technologies by California companies.
3. Support R&D activities at California universities and research institutions.
4. Prepare California workers to participate in ZEV-related jobs.

Resources

A very comprehensive plan brought to you by the state asking "Who Killed the Electric Car".
It doesn't appear that Bush/GM can crush this "..10% ZEV purchases by 2015", though the spots and flocks remain the same.
http://jalopnik.com/5979658/bailed-out-gm-executives-got-excessive-pay-according-to-watchdog?tag=Inside-8-Mile
http://abcnews.go.com/Blotter/story?id=7208201&page=1

Kelly, you need to get that BDS looked at.

Herm, you need to note history. A fired GM CEO is collecting $20 million as we comment. Another Florida 'chad' problem, and the pension "turn around" party calling Tesla, with a 10,000 COTY order backlog, a loser, could be overriding this article. ZEVs could be further delayed and the US launched into another "Hydrogen Initiative", with many, many $billions spent and not one FC car marketed.

Detroit also discussed this California law: http://www.detroitnews.com/article/20130131/AUTO01/301310351
"The original "zero emission" mandate, set in 1990, would have required the six largest automakers to produce 10 percent of their combined fleet as zero-emission vehicles by 2003, but that was scrapped after automakers fought it."
"In March 2008, the board required the automakers to sell a combined 7,500 zero-emission vehicles between 2012 and 2014, down from 25,000 vehicles."

Pretend this were a non-corrupt industry and CA required .03 percent of the new products sold in CA to be non-polluting - but the industry would rather attack the law and watch Californians (customers) die for decades from industry product pollution. Especially in LA, the people know the air they breathe, and the costs, thousands of times a car's cost, that ICE respiratory disease, healthcare, and death inflict.

Imagine 'company stores' that only sell or service a THIRD of their non-polluting products (e.g., Ford EVs), yet the people should cheer.

This seems historic. Wasn't there something about EV and NON-EV water fountains, seating areas, and such... or was that the benefits of tobacco smoking?

No, what Bush did was note that it's best to get the heavy fuel users over to something new and better than it is to just focus on the small segment that uses very little fuel at all. That's the part that fuel cells play: to replace heavy systems. Things with engines bigger than your house will need something to run on other than oil. Things with engines as big as your bathroom also will need something other than oil to run on. Bush didn't need to worry about the small stuff, as idiots were tunnel-visioned onto it and were forgetting all the rest. He just had to deal with the other 94% or so of the fuel use they were missing.. No big deal. Sometimes I wonder just how stupid humanity can manage to be..

@w2k, light vehicles ARE most oil use and the Bush Hydrogen Initiative is pure market failure. Bush stated (ten years ago) that one's first car will be fuel cell powered.
http://www.hydrogen.energy.gov/h2_fuel_initiative.html
http://articles.cnn.com/2003-02-06/politics/bush-energy_1_hydrogen-power-fuel-cells-dependence-on-foreign-oil?_s=PM:ALLPOLITICS

Kelly, even in the light-duty market, most of the fuel used is in cars and trucks that will not be serviced by batteries alone for a very long time, because they simply require far too much total energy. Simply put, they gobble energy. This means that while, say, 3% of the market might be BEV by 2025, it won't make up 3% of the fuel used, or even 1%. But if 3% of the market by then is also fuel cells, it's very likely more than a combined 4% of the fuel will be replaced by electric and hydrogen sources.
I don't wanna wait the time it would take for batteries alone to replace enough fuel use to make a real difference.

w2k, EVs (90% efficient) use less than 1/3 the energy of ICE (20 to 30%). Already, 20-mile-range plug-in EVs can eliminate most commute light gas use (20 × 365 = 7,300 miles annually) with just one daily charge.

We will see soon enough. As soon as we have enough of both kinds of cars on the road, we should start to get a real-world look at what impact they actually have. My bet right now is that even with BEVs and FCs and some others, we will still fail to do all that well this side of 2030.

@wintermane2000 It would appear that you have kelly's number. Bush was able to take the long view on energy and look 50 years down the road, when we might need many alternatives. Kelly is wrong on so many levels. This is because kelly is stuck in the past with some petty political agenda and is too lazy to do the work. If one would take the time to read the DOE link, they could learn that the focus of HFCV was city buses in the time frame of 2015, to determine if commercially viable. That sounds like a pretty good plan and has nothing to do with BEVs, which are a different market. Two plans are better than one, unless you are kelly and only like one silly plan that will not work. We should look at this in the context of 2003, but many things have changed since then. First, air pollution has improved in cities and is no longer a health threat, thanks to such things as low-sulfur diesel fuel. California is trying to fix a problem that has been fixed. The second thing that has changed is the increase in US oil production. We still need to look 50 years down the road, but we have added 20 years to when we might actually have to rely on alternatives. Third, one alternative has actually been proven. My old truck has been running on E10 for more than 6 years now. I am a little disappointed in the progress biodiesel has made. LCA clearly shows that it has the potential for reducing the environmental impact of transportation fuel.

@Kit P, "is too lazy to do the work" of reading the links to reality provided by a dozen other comments and, for months, too wrong to find links supporting his nonsense.

Bush Nearly Turns a Hydrogen Car into the Hindenburg. For you, Kit P, there is even a picture!
"Credit Ford Motor Co. CEO Alan Mulally with saving the leader of the free world from self-immolation. Mulally told journalists at the New York auto show that he intervened to prevent President Bush from plugging an electrical cord into the hydrogen tank of Ford's hydrogen-electric plug-in hybrid at the White House last week."
http://www.democrats.com/bush-hydrogen-car

I happen to like Bush. Thought he was a good POTUS. However, maybe Ford Motor Co. CEO Alan Mulally belongs in jail. If you deliver a product that could get people killed if they make a mistake, then you are criminally negligent. I always thought it was funny how idiots would try to make fun of Bush. Like pointing out that he was a fighter pilot in the National Guard. Not getting yourself killed becoming a fighter pilot is an accomplishment few can match. Personally, I am not too impressed with being an activist in Chicago.

@Kit P, no one doubts you like the US President, whose final eight-year approval rating was 20%, only to have VP Cheney hated more, with 13% approval, while they handed out no-WMD wars, no-bid war contracts, stalled ZEVs, etc. I am honorably discharged from the US Army.
Jet pilot Bush (stripped of flight status) was at Yale in 1972, but 'stationed' in Colorado, and claimed to attend monthly meetings in Alabama, BUT A YEAR PASSED W/O ANY ATTENDANCE OR A SINGLE ATTENDANCE RECORD. If one in the military is AWOL for months, they are a deserter. W is a DESERTER. http://www.awolbush.com/ Historians easily consider the Bush administration the most corrupt in US history.

"no one doubts you like the US President" I am not too impressed with the current one. I judge each POTUS by what they do, not what a few loons think of him. "I am honorably discharged from the US Army." Not too impressed with that either. Did you accomplish anything, and what did you learn? I was a nuclear-trained Navy officer and used my experience to work in the power industry. While my accomplishment does not demean others, kelly spends a lot of time disrespecting a retired POTUS. The same event can be viewed in different ways. When Bush was on the White House lawn with future technologies, I saw someone who took the time to think about the future of our country. It takes a wild stretch of credibility to think Bush was trying to set himself on fire.
What I find sad about kelly, now that we have heard about the Army colonel thing, is the remarkable resemblance to a high-school debater. The complete lack of wisdom that comes with age. Not only did I vote for Bush twice, I really liked his energy policy. I read it and have it on the bookshelf in front of me. I have also read the California energy policy that took five years longer to produce. For California, wind and solar are shiny things to distract folks like kelly from realizing how much natural gas they are burning.

"Be glad it's our ALLIES making these pollution-free, economic electric power gains supplying tens of millions of. " I am not glad, because solar is not pollution-free. Try reading a few LCA. They are not economical. They supply zero households with power. The power industry supplies customers power 24/7. Solar can supply some electricity on nice days, when it is needed the least.

@Kit P, I am trying to be polite. "I am a retired (1999) Army colonel" Col. Lechliter wrote http://www.nytimes.com/packages/pdf/opinion/lechliter.pdf , 32 pages, which you were asked to please read AND PROVES Bush is a DESERTER AND THIEF. Do you know ANYTHING about electricity? "Germany's 22 GW Solar Energy Record" alone is the power of ~20 nuclear power plants. You apparently don't realize how stupid and out of date you are, so I'm including US Navy nuclear power cost figures: "The newly calculated life-cycle cost break-even cost-ranges, which supercede the break-even cost figures from the 2005 NR quick look analysis, are as follows: $210 per barrel to $670 per barrel for a small surface combatant;.." http://www.fas.org/sgp/crs/weapons/RL33946.pdf I'm an EE/MBA with over forty years experience and you ignore facts. Read what you write, or at least what others have, esp. Update 1:..

My over forty years of experience is in producing power. Too bad kelly will not tell us what kind of experience. Every old person has forty years experience, so what? Maybe kelly was an MBA at ENRON with lots of skill at cooking the books. "$70 per barrel to $225 per barrel for a medium-size surface combatant" It would appear that kelly does not understand the difference between stationary power plants and various kinds of navy ships. One of the things that sets our navy apart is the ability to refuel underway. However, during underway refueling, ships are at their most vulnerable. Nuke ships do not have to refuel when responding at full speed to a crisis. For the record, I served on the USS Texas and USS Virginia, which were medium-size surface combatants. "alone is the power of ~20 nuclear power plants" Again, that is not true. GW indicates capacity to produce power. A 1000 MWe nuke plant produces 1000 MWh every hour, 90% of the time. Nuke plants schedule refueling and maintenance on nice spring days when not much power is needed. Solar has a 10% capacity factor and makes no power when it is needed. Germany's 22 GW of solar = 0 GWh on cold winter nights. Germany holds the record for buying junk. Those are the facts.
Power need projections are made on ~all large grids as the earth rotates, the sun sky position varies, weather changes, wind changes etc. Germany's power is now mainly it's renewable energy mix. You will say that's a lie, BUT Germany has 'pulled the plug' on nuclear and the lights are still on. http://cleantechnica.com/2012/02/09/clean-energy-loving-germany-increasingly-exporting-electricity-to-nuclear-heavy-france/ Nuclear is expensive and being closed. It's impossible to defend Bush after reading his records and Col. Lechliter's 32 page report. Oh, and German engineering does NOT "holds the record for buying junk." No one can believe you. "9AM to 5PM" correction, http://cleantechnica.com/2012/12/31/top-10-wind-power-stories-of-2012/ “like PEAK electric power usage” Really! I just checked real time grid demand, PJM peaks today at 6 pm. CA ISO peaks at 7 pm. Midwest ISO at 10 pm. RTE (France) at 7 pm. In fact RTE lets you check to see how of much of each source is providing power. At noon solar was producing 0 MW in France and nuclear 56,306 MW. France was exporting 3150 MW to their neighbors. “If you had read all the links, you would see that solar is complimented by wind and natural gas turbines for night electric and load balancing.” I have links to many ISO and there is no reason to think that wind and solar are complementary. “Germany's power” The fact is that Kelly cannot get anything right but keeps repeating links to a nice spring day in Germany. I really care about how power is produced on the PJM because that is where I live. Germany not so much but there is no chance that Kelly has a clue. “It's impossible to defend Bush” I think Bush did a great job on energy issues. See not so impossible after all. “Power need projections are made …” That is correct. We have clearly demonstrated that we can managed around making some power with wind and solar. Not very much where I live. Bush is a proven deserter and election fraud. 50 states and only his gov brother's one has election "chad/vote count problems". The world knows Bush had fewer votes and is a fraud. Defending Bush's year AWOL perhaps says something about Navy service. Renewables are replacing nuclear and fossil fuels. The beauty of customer solar is in reducing our utility bill. So of course power companies lie about distributed solar costs to protect their profits. Like GM crushing EVs and CARB laws, like tobacco companies calling smoking "good" and "healthy", power plant monopolies are lying, blocking progress, and piling up profits. Every five minutes someone switches to solar power. http://www.solarcity.com/ Some areas of the US have 250% more radiant energy than Germany. Non-renewable fuels are being reduced to backup service. The Ameren power company has even voided customer land titles at the Missouri Lake of the Ozarks, besides jacking rates, breaking state and federal laws, failing maintenance, being sued for the above, etc, etc.. “Some areas of the US have 250% more radiant energy than Germany. ” How about where kelly lives and gets his electrical power from the evil Ameren? It would seem that Ameren promotes solar and not blocking progress . http://www.ameren.com/Solar/Pages/SolarEnergyProject.aspx “The goal of Ameren’s Solar Energy Project is to provide a state-of-the-art testing ground to compare various solar technologies. This allows our customers to determine which photovoltaic components will best suit their home or business needs. 
Here the output is provided:
http://www.ameren.com/Solar/Pages/EnergyProducedbyTechnologyType.aspx
http://www.ameren.com/Solar/Pages/TotalSolarCapacityChart.aspx
The four different types of PV have a capacity of 115 kW. The actual annual production for 2012 was 132,327 kWh. That works out to about a 13% capacity factor. Just what you would expect for that part of the country for a utility-scale PV project. The tracking system did 25% and the best fixed system got 15%. That's called real data for kelly's backyard.
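A quick check of the arithmetic in that last comment, using only the numbers quoted above:
$$\text{capacity factor} = \frac{132{,}327\ \text{kWh}}{115\ \text{kW} \times 8760\ \text{h}} \approx 0.131 \approx 13\%.$$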
# How to hold axis constant with Animate

So I am trying to use Animate for my 3D plot, to see how the function changes with respect to time. But for some reason the z-axis keeps rescaling, such that the function appears to be constant (but isn't actually). My question is: is it possible to fix the axis? If yes, how?

Here is my code:

uh[x_, y_, t_] = 1 + E^(-2 t)*Sin[x]*Sin[y];
Animate[Plot3D[uh[x, y, t], {x, -2 π, 2 π}, {y, -2 π, 2 π}], {t, 0, π}, AnimationRunning -> False]

• Use PlotRange -> {0, 2} in Plot3D? – kglr Oct 28 '14 at 0:49
• I tried, but I feel like the result should be different... But I don't see what would be wrong. Any idea? Oct 28 '14 at 0:57
• just posted what I get in Version 9.0.1.0 (Windows 8) when I change the PlotRange. – kglr Oct 28 '14 at 1:02

uh[x_, y_, t_] = 1 + E^(-2 t)*Sin[x]*Sin[y];
Animate[Plot3D[uh[x, y, t], {x, -2 π, 2 π}, {y, -2 π, 2 π}, PlotRange -> {0, 2}], {t, 0, π}, AnimationRunning -> False]
# The 404 not found image is not unique This site's 404 not found image is not unique. It's shared with TeX SE. It's entirely possible that this is intentional, but since most sites don't have the same image, I thought it was reasonable to suspect that it was a bug. Sister bug report: on Meta TeX
# Concept 40: Accruals and Valuation Adjustments

101 Concepts for the Level I Exam

Accruals: The accrual accounting principle requires that a firm recognize revenues when they are earned and expenses when they are incurred. At times, there is a timing difference between the cash movements and the recognition of revenues or expenses. In such cases accrual entries are required. If cash is transferred at the same time as the revenue or expense is incurred, there is no need for accrual entries. The four types of accrual entries are:

• Unearned revenue: (Cash is received first, and the goods/services will be delivered later.) Increase cash and create a liability for the goods/services that the firm has to provide in the future. In December, a newspaper company receives $200 from a customer for the next year's subscription. Accounting entry on Dec 31st: Cash (asset) ↑ $200 and Unearned subscription income (liability) ↑ $200.

• Accrued revenue: (Goods/services are delivered first, and cash will be received later.) Record revenue for the credit sale, and increase accounts receivable. In December, a laundry company provided $300 worth of services to a customer. The payment will be received next month. Accounting entry on Dec 31st: Revenue (income) ↑ $300 and Accounts receivable (asset) ↑ $300.

• Prepaid expenses: (Cash is paid first, and the expense will be recognized later.) Decrease cash and create an asset, prepaid expense. In December, a retail store paid $12,000 as advance rent for the next year. Accounting entry on Dec 31st: Rent prepaid (asset) ↑ $12,000 and Cash (asset) ↓ $12,000.

• Accrued expenses: (Expense is recognized first, and cash will be paid later.) Record expenses for the credit purchase, and increase accounts payable. A company owes its employees $1,000 in wages for work performed in the month of December. The wages will be paid on 5th January. Accounting entry on Dec 31st: Wages (expense) ↑ $1,000 and Wages payable (liability) ↑ $1,000.

Valuation adjustments: Most assets are recorded on the balance sheet at historical cost. However, accounting standards require that certain assets be shown at current market values. The adjustment required to bring the asset value to the current market value is called a valuation adjustment. The accounting equation must be kept in balance, so any valuation adjustment must be offset with an equal change to owners' equity. This is done either through the income statement or through other comprehensive income. If an asset of $100 falls to $80, then a valuation adjustment of -$20 must be recorded. Also, a loss of $20 must be recorded either on the income statement or in other comprehensive income.
A completely filled barrel and its contents have a combined weight of 200 lb. A cylinder C is connected to the barrel at a height h = 22 in., as shown. Knowing $$\mu_{s} = 0.40$$ and $$\mu_{k} = 0.35$$, determine the maximum weight of C so the barrel will not tip.
# Vertical and horizontal asymptotes

• December 29th 2009, 08:36 PM
integral

Vertical and horizontal asymptotes. How would you find the asymptotes of:

$y=\frac{1}{x^2-1}$

So far all my attempts have come up with something dealing with n/0. (I know how to do it graphically.)

• December 29th 2009, 09:22 PM
bigwave

Quote: Originally Posted by integral: How would you find the asymptotes of $y=\frac{1}{x^2-1}$? So far all my attempts have come up with something dealing with n/0.

When the rational function becomes undefined, in this case when $x^2-1$ becomes $0$, you have a vertical asymptote. So there are vertical asymptotes at $x = \pm 1$.

Also, since $f(x)=0$ has no solutions, the graph has a horizontal asymptote at $y=0$: if the degree of the denominator is greater than the degree of the numerator, the x-axis is the horizontal asymptote.

Also, what sometimes appear to be asymptotes on a graphing calculator are really just lines going from point to point where there really is a hole in the graph.

• December 29th 2009, 10:05 PM
integral

Ah, I see. So if the equation cannot equal zero, then the points where it is undefined, and $y=0$ (if the degree of the denominator is larger than that of the numerator), are the asymptotes?

And just so I do not spam with more than one thread: could you please explain how to find points of inflection without using a graph? :D Such as for $\sqrt[3]{x}$, where the point of inflection would be 0.

• December 30th 2009, 09:04 AM
bigwave

Basically, you find inflection points by taking $f''(x)$ and finding where it equals zero or is undefined; in this case $f''(x) = \left(-\frac{2}{9x^{\frac{5}{3}}}\right)$, which is undefined at $x=0$.
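For completeness, here is the differentiation behind that last post, worked out step by step (a sketch added for clarity; it was not part of the original thread):

$f(x) = x^{1/3}, \quad f'(x) = \frac{1}{3}x^{-2/3}, \quad f''(x) = -\frac{2}{9}x^{-5/3} = -\frac{2}{9x^{5/3}}$

$f''(x)$ is never zero, but it changes sign at $x=0$ (positive for $x<0$, negative for $x>0$), so the inflection point of $\sqrt[3]{x}$ is at $x=0$, where $f''$ is undefined rather than zero.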
### HOW DOES RANKED-CHOICE WORK?

Here's a simplified version of how ranked-choice voting works for multiple candidates seeking one seat.

Voters make their first choice and, if desired, their second and third choices, ranked in order. If a candidate gets 50 percent plus one of the first-choice votes, that candidate is elected. If no one reaches that threshold, the candidate with the fewest first-choice votes is eliminated, along with any other candidate who has no mathematical chance of winning. Then the second-choice votes of the people who ranked the ousted candidates first are given to the remaining contenders. This process is repeated until a candidate reaches the threshold to win or, in the case of two remaining candidates, has more votes than the other. Second- and third-choice votes aren't counted until the voter's first and second choices, respectively, are eliminated.

Example: Candidate A gets 41 votes; candidate B, 39 votes; and candidate C, 20 votes. Candidate C is eliminated, and that candidate's second-choice votes -- say 12 for candidate B and 8 for candidate A -- are tabulated. Candidate B now has 51 votes, candidate A has 49, and so B wins.

STEVE BRANDT
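The same elimination-and-transfer process can be expressed as a short Python sketch (the function and ballot encoding are illustrative only, not any official tallying code; ties and exhausted ballots are handled only crudely):

from collections import Counter

def instant_runoff(ballots):
    #each ballot is a list of candidates in preference order
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        #count each ballot for its highest-ranked remaining candidate
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader  #majority reached, or only one candidate left
        remaining.discard(min(tallies, key=tallies.get))  #eliminate last place

#the article's example: 41 A-first, 39 B-first, 20 C-first
#(12 of C's ballots rank B second, 8 rank A second)
ballots = [['A']] * 41 + [['B']] * 39 + [['C', 'B']] * 12 + [['C', 'A']] * 8
print(instant_runoff(ballots))  #prints: B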
## Combinatorics Research Group in Xiamen

The research interests of our group are in combinatorial theory (algebraic combinatorics and enumerative combinatorics), graph theory (algebraic graph theory, matching theory, digraphs, connectivity of graphs, random walks on graphs) and knot theory (invariants of knots and combinatorial knot theory), as well as their applications in mathematical chemistry, statistical physics and computer science. We have held seminars since 2004. Our group currently consists of seven professors (Fuji Zhang, Xiaofeng Guo, Lianzhu Zhang, Jianguo Qian, Xian'an Jin, Weigen Yan, Haiyan Chen), two associate professors (Liqiong Xu, Litao Guo), and two assistant professors (Weiling Yang, Yuan-Hsun Lo). 2015-7-20

## Seminar Information

Fall 2017 Graph Theory Seminar

Time: 15:00-17:00, Thursday afternoon
Venue: Room 105, Lab Building

12 Oct  Spanning Forest Complexes and f-vectors I, by M. Asif
19 Oct  Spanning Forest Complexes and f-vectors II, by M. Asif
26 Oct  Knot graphs, by Qi Yan (Room 108, Lab Building)
1 Nov   Qinghai Normal University
9 Nov   Discrete isoperimetric problems and related applications, by Mingzu Zhang

Abstract: The classical isoperimetric inequality in the Euclidean plane $R^2$ states that for a simple closed curve $M$ of length $L_M$, enclosing a region of area $A_M$, one gets ${L_M}^2 \geq 4\pi A_M$. We will discuss discrete isoperimetric problems for the power graph, in both the edge version and the vertex version. The relationship between a continuous nowhere-differentiable function, the Takagi function, and the edge isoperimetric problem of the bijective connection network is given. The $h$-extra edge-connectivity of these graphs is also related to a problem about the level sets of the Takagi function, raised by Donald Knuth. D. Ellis and I. Leader discussed an edge isoperimetric inequality for antipodal subsets of the hypercube, and we rewrite their results. We also investigate some properties of the vertex isoperimetric problem of the hypercube. It is also related to the modified Takagi function and can be applied to calculate the $h$-extra connectivity of the hypercube.

2016
# What is the least common multiple (LCM) of 4 and 7?

Oct 16, 2017

$28$

#### Explanation:

First, write a list of the multiples of each number, like so:

$4: \text{ } 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56 . . .$
$7: \text{ } 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77 . . .$

Then you find the first number which $4$ and $7$ have in common. $4$ and $7$ have $28$ and $56$ in common, BUT the LCM would be $28$, since that's the LEAST common, or the smallest number that they have in common.

Hope this helped!
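For bigger numbers, listing multiples gets tedious; the identity $\text{lcm}(a,b) = \frac{a \cdot b}{\gcd(a,b)}$ gives the same answer directly. A quick illustrative sketch in Python (not part of the original answer):

import math

def lcm(a, b):
    #lcm(a, b) = a*b / gcd(a, b)
    return a * b // math.gcd(a, b)

print(lcm(4, 7))  #28, since gcd(4, 7) = 1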
# Regression based on function recurrence

There is a hypothetical machine which takes an integer $$x$$ and returns an integer $$y$$ such that $$y = F(x) + \varepsilon$$, where $$\varepsilon$$ is an integer (the noise term). It is known that the function satisfies

$$F(\alpha x_1) + \alpha F(x_2) = F(F(x_1 + x_2)) \quad \forall x_1, x_2 \in \mathbb{Z}$$

We are building a machine learning model to figure out how the machine behaves. We perform $$N$$ trials: $$x_1, x_2, \ldots, x_N$$ are the inputs to the machine and $$y_1, y_2, \ldots, y_N$$ are the corresponding outputs. We are trying to minimise the mean squared error while fitting. What would be the output of the model, given an integer $$x$$, after training?

Hint: Plug $$x_1 = 0, x_2 = n$$ and $$x_1 = 1, x_2 = n-1$$. Can you infer the functional form of $$F$$ from this?

Yes, this is a homework question. This is what I have using the hints (note that the $$x_1 = 1$$ substitution gives $$F(\alpha \cdot 1) = F(\alpha)$$):

$$F(n) = \frac{F(\alpha) - F(0)}{\alpha} + F(n-1)$$

I am not looking for the solution, but just for ideas. I have no idea how to deal with this problem. Thank you!
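For what it's worth, here is the algebra behind that recurrence spelled out (a sketch of the hinted step only, not a full solution). The two substitutions give

$$F(0) + \alpha F(n) = F(F(n)) \qquad (x_1 = 0,\ x_2 = n)$$

$$F(\alpha) + \alpha F(n-1) = F(F(n)) \qquad (x_1 = 1,\ x_2 = n-1)$$

Setting the left-hand sides equal and dividing by $$\alpha$$ yields $$F(n) - F(n-1) = \frac{F(\alpha) - F(0)}{\alpha}$$, a constant difference, which suggests $$F$$ is affine (a linear function plus an intercept) in its integer argument.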
# Backup Storage

## Disk Management

Proxmox Backup Server comes with a set of disk utilities, which are accessed using the disk subcommand or the web interface. This subcommand allows you to initialize disks, create various filesystems, and get information about the disks.

To view the disks connected to the system, navigate to Administration -> Storage/Disks in the web interface, or use the list subcommand of disk:

# proxmox-backup-manager disk list
┌──────┬────────┬─────┬───────────┬─────────────┬───────────────┬─────────┬────────┐
│ name │ used   │ gpt │ disk-type │ size        │ model         │ wearout │ status │
╞══════╪════════╪═════╪═══════════╪═════════════╪═══════════════╪═════════╪════════╡
│ sda  │ lvm    │ 1   │ hdd       │ 34359738368 │ QEMU_HARDDISK │ -       │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdb  │ unused │ 1   │ hdd       │ 68719476736 │ QEMU_HARDDISK │ -       │ passed │
├──────┼────────┼─────┼───────────┼─────────────┼───────────────┼─────────┼────────┤
│ sdc  │ unused │ 1   │ hdd       │ 68719476736 │ QEMU_HARDDISK │ -       │ passed │
└──────┴────────┴─────┴───────────┴─────────────┴───────────────┴─────────┴────────┘

To initialize a disk with a new GPT, use the initialize subcommand:

# proxmox-backup-manager disk initialize sdX

You can create an ext4 or xfs filesystem on a disk using fs create, or by navigating to Administration -> Storage/Disks -> Directory in the web interface and creating one from there. The following command creates an ext4 filesystem and passes the --add-datastore parameter, in order to automatically create a datastore on the disk (in this case sdd). This will create a datastore at the location /mnt/datastore/store1:

# proxmox-backup-manager disk fs create store1 --disk sdd --filesystem ext4 --add-datastore true

You can also create a zpool with various raid levels from Administration -> Storage/Disks -> ZFS in the web interface, or by using zpool create. The command below creates a mirrored zpool using two disks (sdb & sdc) and mounts it under /mnt/datastore/zpool1:

# proxmox-backup-manager disk zpool create zpool1 --devices sdb,sdc --raidlevel mirror

Note: You can also pass the --add-datastore parameter here, to automatically create a datastore from the disk.

You can use disk fs list and disk zpool list to keep track of your filesystems and zpools, respectively.

Proxmox Backup Server uses the package smartmontools. This is a set of tools used to monitor and control the S.M.A.R.T. system for local hard disks. If a disk supports S.M.A.R.T. capability, and you have this enabled, you can display S.M.A.R.T. attributes from the web interface or by using the command:

# proxmox-backup-manager disk smart-attributes sdX

Note: This functionality may also be accessed directly through the use of the smartctl command, which comes as part of the smartmontools package (see man smartctl for more details).

## Datastore

A datastore refers to a location at which backups are stored. The current implementation uses a directory inside a standard Unix file system (ext4, xfs or zfs) to store the backup data. Datastores are identified by a simple ID. You can configure this when setting up the datastore. The configuration information for datastores is stored in the file /etc/proxmox-backup/datastore.cfg.

Note: The File Layout requires the file system to support at least 65538 subdirectories per directory. That number comes from the 2^16 (= 65536) pre-created chunk namespace directories, plus the . and .. default directory entries (65536 + 2 = 65538).
This requirement excludes certain filesystems and filesystem configurations from being supported for a datastore: for example, ext3 as a whole, or ext4 with the dir_nlink feature manually disabled.

### Datastore Configuration

You can configure multiple datastores; a minimum of one datastore needs to be configured. The datastore is identified by a simple name and points to a directory on the filesystem. Each datastore also has associated retention settings: how many backup snapshots to keep for each interval of hourly, daily, weekly, monthly and yearly, as well as a time-independent number of backups to keep in that store. Pruning and Removing Backups and garbage collection can also be configured to run periodically, based on a configured schedule (see Calendar Events) per datastore.

#### Creating a Datastore

You can create a new datastore from the web interface, by clicking Add Datastore in the side menu, under the Datastore section. In the setup window:

• Name refers to the name of the datastore
• Backing Path is the path to the directory upon which you want to create the datastore
• GC Schedule refers to the time and intervals at which garbage collection runs
• Prune Schedule refers to the frequency at which pruning takes place
• Prune Options set the number of backups which you would like to keep (see Pruning and Removing Backups)
• Comment can be used to add some contextual information to the datastore

Alternatively, you can create a new datastore from the command line. The following command creates a new datastore called store1 on /backup/disk1/store1:

# proxmox-backup-manager datastore create store1 /backup/disk1/store1

#### Managing Datastores

To list existing datastores from the command line, run:

# proxmox-backup-manager datastore list
┌────────┬──────────────────────┬─────────────────────────────┐
│ name   │ path                 │ comment                     │
╞════════╪══════════════════════╪═════════════════════════════╡
│ store1 │ /backup/disk1/store1 │ This is my default storage. │
└────────┴──────────────────────┴─────────────────────────────┘

You can change the garbage collection and prune settings of a datastore by editing the datastore from the GUI or by using the update subcommand. For example, the command below changes the garbage collection schedule using the update subcommand, then prints the properties of the datastore with the show subcommand:

# proxmox-backup-manager datastore update store1 --gc-schedule 'Tue 04:27'
# proxmox-backup-manager datastore show store1
┌────────────────┬─────────────────────────────┐
│ Name           │ Value                       │
╞════════════════╪═════════════════════════════╡
│ name           │ store1                      │
├────────────────┼─────────────────────────────┤
│ path           │ /backup/disk1/store1        │
├────────────────┼─────────────────────────────┤
│ comment        │ This is my default storage. │
├────────────────┼─────────────────────────────┤
│ gc-schedule    │ Tue 04:27                   │
├────────────────┼─────────────────────────────┤
│ keep-last      │ 7                           │
├────────────────┼─────────────────────────────┤
│ prune-schedule │ daily                       │
└────────────────┴─────────────────────────────┘

Finally, it is possible to remove the datastore configuration:

# proxmox-backup-manager datastore remove store1

Note: The above command removes only the datastore configuration. It does not delete any data from the underlying directory.
#### File Layout

After creating a datastore, the following default layout will appear:

# ls -arilh /backup/disk1/store1
276493 -rw-r--r-- 1 backup backup       0 Jul 8 12:35 .lock
276490 drwxr-x--- 1 backup backup 1064960 Jul 8 12:35 .chunks

.lock is an empty file used for process locking. The .chunks directory contains folders, starting from 0000 and increasing in hexadecimal values until ffff. These directories will store the chunked data, categorized by checksum, after a backup operation has been executed.

# ls -arilh /backup/disk1/store1/.chunks
545824 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 ffff
545823 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffe
415621 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffd
415620 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffc
353187 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffb
344995 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fffa
144079 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff9
144078 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff8
144077 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 fff7
...
403180 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000c
403179 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000b
403177 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 000a
402530 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0009
402513 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0008
402509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0007
276509 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0006
276508 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0005
276507 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0004
276501 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0003
276499 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0002
276498 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0001
276494 drwxr-x--- 2 backup backup 4.0K Jul 8 12:35 0000
276489 drwxr-xr-x 3 backup backup 4.0K Jul 8 12:35 ..
276490 drwxr-x--- 1 backup backup 1.1M Jul 8 12:35 .

Once you've uploaded some backups or created namespaces, you may see the backup type (ct, vm, host) and the start of the namespace hierarchy (ns).

### Backup Namespaces

A datastore can host many backups, as long as the underlying storage is large enough and provides the performance required for a user's use case. However, without any hierarchy or separation, it's easy to run into naming conflicts, especially when using the same datastore for multiple Proxmox VE instances or multiple users.

The backup namespace hierarchy allows you to clearly separate different users or backup sources in general, avoiding naming conflicts and providing a well-organized backup content view. Each namespace level can host any backup type (CT, VM or Host), but also other namespaces, up to a depth of 8 levels, where the root namespace is the first level.

#### Namespace Permissions

You can make the permission configuration of a datastore more fine-grained by setting permissions only on a specific namespace. To view a datastore, you need a permission that has at least an AUDIT, MODIFY, READ or BACKUP privilege on any namespace it contains. To create or delete a namespace, you require the modify privilege on the parent namespace. Thus, to initially create namespaces, you need to have a permission with an access role that includes the MODIFY privilege on the datastore itself.

For backup groups, the existing privilege rules still apply: you either need a privileged enough permission, or you need to be the owner of the backup group; nothing changed here.
### Options

There are a few per-datastore options:

#### Tuning

There are some tuning-related options for the datastore that are more advanced:

• chunk-order: Chunk order for verify & tape backup. You can specify the order in which Proxmox Backup Server iterates the chunks when doing a verify or backing up to tape. The two options are:
  • inode (default): Sorts the chunks by inode number of the filesystem before iterating over them. This should be fine for most storages, especially spinning disks.
  • none: Iterates the chunks in the order they appear in the index file (.fidx/.didx). While this might slow down iterating on many slow storages, on very fast ones (for example, NVMe drives) the collecting and sorting can take more time than is gained through the sorted iterating.

This option can be set with:

# proxmox-backup-manager datastore update <storename> --tuning 'chunk-order=none'

• sync-level: Datastore fsync level. You can set the level of syncing on the datastore for chunks, which influences the crash resistance of backups in case of a power loss or hard shutoff. There are currently three levels:
  • none: Does not do any syncing when writing chunks. This is fast and normally OK, since the kernel eventually flushes writes onto the disk. The kernel sysctls dirty_expire_centisecs and dirty_writeback_centisecs are used to tune that behaviour, while the default is to flush old data after ~30s.
  • filesystem (default): This triggers a syncfs(2) after a backup, but before the task returns OK. This way it is ensured that the written backups are on disk. This is a good balance between speed and consistency. Note that the underlying storage device still needs to protect itself against power loss, flushing its internal ephemeral caches to the permanent storage layer.
  • file: With this mode, an fsync is triggered on every chunk insertion, which makes sure each and every chunk reaches the disk as soon as possible. While this reaches the highest level of consistency, for many storages (especially slower ones) this comes at the cost of speed. For many users the filesystem mode is better suited, but for very fast storages this mode can be OK.

This can be set with:

# proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem'

If you want to set multiple tuning options simultaneously, you can separate them with a comma, like this:

# proxmox-backup-manager datastore update <storename> --tuning 'sync-level=filesystem,chunk-order=none'

## Ransomware Protection & Recovery

Ransomware is a type of malware that encrypts files until a ransom is paid. Proxmox Backup Server includes features that help mitigate and recover from ransomware attacks by offering off-server and off-site synchronization and easy restoration from backups.

### Built-in Protection

Proxmox Backup Server does not rewrite data for existing blocks. This means that a compromised Proxmox VE host, or any other compromised system that uses the client to back up data, cannot corrupt or modify existing backups in any way.

### The 3-2-1 Rule with Proxmox Backup Server

The 3-2-1 rule is simple but effective in protecting important data from all sorts of threats, be it fires, natural disasters or attacks on your infrastructure by adversaries. In short, the rule states that one should create 3 backups on at least 2 different types of storage media, of which 1 copy is kept off-site. Proxmox Backup Server provides tools for storing extra copies of backups in remote locations and on various types of media.
By setting up a remote Proxmox Backup Server, you can take advantage of the remote sync jobs feature and easily create off-site copies of your backups. This is recommended, since off-site instances are less likely to be infected by ransomware in your local network. You can configure sync jobs not to remove snapshots that have vanished on the remote source, so that an attacker who has taken over the source cannot cause deletions of backups on the target hosts. If the source host becomes the victim of a ransomware attack, there is a good chance that sync jobs will fail, triggering an error notification.

It is also possible to create tape backups as a second storage medium. This way, you get an additional copy of your data on a different storage medium designed for long-term storage. Additionally, it can easily be moved around, be it to an off-site location or, for example, into an on-site fireproof vault for quicker access.

### Restrictive User & Access Management

Proxmox Backup Server offers a comprehensive and fine-grained user and access management system. The Datastore.Backup privilege, for example, allows only creating, but not deleting or altering, existing backups.

The best way to leverage this access control system is to:

• Use separate API tokens for each host or Proxmox VE cluster that should be able to back data up to a Proxmox Backup Server.
• Configure only minimal permissions for such API tokens. They should only have a single permission that grants the DataStore access role on a very narrow ACL path that is restricted to a specific namespace on a specific datastore, for example /datastore/tank/pve-abc-cluster.

Tip: One best practice to protect against ransomware is not to grant delete permissions, but to perform backup pruning directly on Proxmox Backup Server using prune jobs.

Please note that the same also applies for sync jobs. By limiting a sync user's or an access token's right to only write backups, not delete them, compromised clients cannot delete existing backups.

### Ransomware Detection

A Proxmox Backup Server might still get compromised within insecure networks, if physical access to the server is attained, or due to weak or insufficiently protected credentials. If that happens, and your on-site backups are encrypted by ransomware, the SHA-256 checksums of the backups will not match the previously recorded ones anymore; hence, restoring the backup will fail.

To detect ransomware inside a compromised guest, and to be able to react quickly in case of an attack, it is recommended to regularly test restoring and booting your backups. Make sure to restore to a new guest and not to overwrite your current guest. In the case of many backed-up guests, restore testing can be cumbersome, which is why it is advisable to automate it and verify that your automated process works. If this is not feasible, restoring random samples from the backups periodically (for example, once a week or month) is advised. While creating backups is important, verifying that they work is equally important. This ensures that you are able to react quickly in case of an emergency and keeps disruption of your services to a minimum.
Verification jobs can also assist in detecting a ransomware presence on a Proxmox Backup Server. Since verification jobs regularly check whether all backups still match the checksums on record, they will start to fail if ransomware begins to encrypt existing backups. Please be aware that an advanced enough ransomware could circumvent this mechanism. Hence, consider verification jobs only as an additional, but not a sufficient, protection measure.

### General Prevention Methods and Best Practices

It is recommended to take additional security measures, apart from the ones offered by Proxmox Backup Server. These recommendations include, but are not limited to:

• Keeping firmware and software up to date, to patch exploits and vulnerabilities (such as Spectre or Meltdown).
• Following safe and secure network practices, for example using logging and monitoring tools, and dividing your network so that infrastructure traffic and user or even public traffic are separated, for example by setting up VLANs.
• Setting up long-term retention. Since some ransomware might lie dormant for a couple of days or weeks before starting to encrypt data, it can be that older, existing backups are compromised. Thus, it is important to keep at least a few backups over longer periods of time.

For more information on how to avoid ransomware attacks and what to do in case of a ransomware infection, see official government recommendations, like CISA's (USA) guide, or EU resources, like ENISA's Threat Landscape for Ransomware Attacks, or nomoreransom.org.
### Searches for Large Extra Dimensions, Leptoquarks and Heavy Quarks at CMS

Sushil Singh Chauhan

We present results from several searches for physics beyond the standard model involving large extra dimensions, leptoquarks, and heavy quarks at $\sqrt{s} = 7\ \text{TeV}$ with the CMS experiment. Many different final states are analyzed using the data collected in 2010 and 2011, corresponding to an integrated luminosity of up to $5.0\ \text{fb}^{-1}$. The results are used to set new limits on the scale of large extra dimensions and on the masses of leptoquarks and heavy...
Trying to figure out a single variable expression for Sin

1. Jun 6, 2012 mesa

Well, I am now trying to figure out a single-variable expression for sin. I have a couple of ideas using some pieces of geometric formulas I have played with recently, but this is still new to me. I'm not talking about sin x, but an algebraic expression for the sine wave. Any thoughts?

Last edited: Jun 6, 2012

2. Jun 6, 2012 mathman

You need to clarify what you have in mind.

3. Jun 6, 2012 Number Nine

Do you mean that you want an expression for sin that involves only addition/subtraction, multiplication/division and exponentiation? Have you tried a Taylor polynomial?

4. Jun 6, 2012 HallsofIvy Staff Emeritus

y = sin(x - ct) is a perfectly good formula for a sine wave moving with speed c. Did you have something else in mind?

5. Jun 6, 2012 mesa

An algebraic expression that will compute y for a given x on a sine wave.

6. Jun 6, 2012 mesa

Basically, I want an algebraic expression (if it can be done in one piece) that will spit out the right y values for a given x. What is a Taylor polynomial?

7. Jun 6, 2012 Number Nine

Something that you'll learn about in calculus; it's a way of approximating certain kinds of functions using polynomials. The sin function is nice in the sense that it is actually everywhere equal to its Taylor series.

8. Jun 6, 2012 HallsofIvy Staff Emeritus

The Taylor series expression for sin(x), about x = 0, also called the "Maclaurin series", is $x - x^3/3! + x^5/5! - \cdot\cdot\cdot + ((-1)^n/(2n+1)!)x^{2n+1} + \cdot\cdot\cdot$

9. Jun 6, 2012 mesa

Is it possible to get an exact function for sin?

10. Jun 6, 2012 Number Nine

With a finite number of terms, it can only be approximated (to any conceivable degree of precision, mind you). I don't think there is any other algebraic expression.

11. Jun 6, 2012 Vorde

Well, including the complex plane, I'd reckon that $\frac{e^{ix} - \cos(x)}{i}$ should be equal to sin(x), but I doubt that's what you are looking for. I don't think there is a non-infinite algebraic function that will accomplish what you are looking for.

12. Jun 6, 2012 mesa

Technically correct, but not quite what I'm looking for; cos puts me back to square one lol. Interesting, well let's give it a shot. Any thoughts on where to start? I was going to use the unit circle and partially filled areas to see what I can pull from that.

Last edited: Jun 6, 2012

13. Jun 6, 2012 HallsofIvy Staff Emeritus

I still don't know what you mean by this. "y = sin(x)" is a perfectly good function and is every bit as "exact" as, say, $\sqrt{x}$ and $e^x$. So the question is: what do you mean by "exact" here?

14. Jun 6, 2012 mesa

That it is, but I want to see what sin actually looks like algebraically, and it should be a fun exercise.

15. Jun 6, 2012 Number Nine

It doesn't look like anything, algebraically; it's a trigonometric function. The closest you can get is its Taylor series, which has already been posted.

16. Jun 6, 2012 mesa

Most likely true. Worst case, this is a good exercise for an older returning student. I want to get a strong handle on this stuff and I really enjoy digging into what is taught in my classes. I have an idea about what I want to try and would love some input from people like you. Heck, you likely will have a completely different and better strategy than mine, but most importantly I will learn from this process! Math is a big subject and I am regularly amazed at what it can accomplish at the hands of those who know how to wield it ;)

17.
Jun 6, 2012 Vorde

Well, I'd say the only luck you are going to get is with the Taylor polynomial, but that won't tell you what sine 'actually looks like', just how to mimic it to an arbitrary precision. I can't see a way to write an equation for sine without using other trigonometric functions, but good luck to you!

18. Jun 15, 2012 mesa

Well, I have to admit this one has been challenging, although I'm not throwing in the towel yet.

19. Jun 15, 2012 Number Nine

Well, you should. As has been clearly explained to you, sin has no expression in terms of elementary algebraic operations. Trying to square the circle is not noble, just futile.

20. Jun 15, 2012 Bob S

Here is another way. Do you remember the Pythagorean theorem? $A^2 + B^2 = C^2$, where C is the hypotenuse and A is close to the origin? Choose C = 1 and, using the angle between A and C, slowly increase the angle from 0 to 90 (or 180) degrees. The length of B is equal to the sine of the angle.
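For anyone who wants to see how quickly the truncated Maclaurin series converges in practice, here is a small Python sketch (illustrative only; not from the thread):

import math

def sin_taylor(x, terms=10):
    #sin(x) ~ sum over n of (-1)^n * x^(2n+1) / (2n+1)!
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

for t in (2, 4, 8):
    approx = sin_taylor(1.0, terms=t)
    print(t, approx, abs(approx - math.sin(1.0)))

With just a handful of terms, the error at x = 1 is already tiny, which is why "approximated to any conceivable degree of precision" is the honest answer given in the thread.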
# Difference between weighted average regression and locally weighted regression?

I was reading about locally weighted regression in the paper on Locally Weighted Learning by Christopher G. Atkeson, Andrew W. Moore and Stefan Schaal, which appeared in Artificial Intelligence Review, but I could not understand it fully. Especially the difference they describe between distance-weighted averaging and locally weighted regression.

The criterion for finding $\hat y$, the prediction with respect to the given training set, is in general

$$C(q) = \sum_{i=1}^n \Big( (\hat y - y_i)^2 \, K(d(x_i, q)) \Big)$$

For distance-weighted averaging,

$$\hat y = \frac{\sum_i y_i \, K(d(x_i, q))}{\sum_i K(d(x_i, q))}$$

For locally weighted regression,

$$\hat y = x^T \beta$$

I am clear that I am missing something while understanding these two, as I am not able to follow the physical-interpretation diagrams they have given: one for distance-weighted averaging and one for locally weighted regression (the figures are not reproduced here).

Can someone explain why the author says that, in the case of locally weighted regression, the strings can pull in such a way that the line can translate and rotate, whereas in weighted averaging it can only move up or down?
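A small numerical sketch of the two estimators may help (the Gaussian kernel and toy data below are my own illustrative choices, not from the paper):

import numpy as np

def kernel(d, h=1.0):
    #Gaussian kernel on distance d with bandwidth h
    return np.exp(-(d / h)**2)

def dwa_predict(X, y, q):
    #distance-weighted averaging: a weighted mean of the y_i,
    #so the prediction can only sit between observed y values
    w = kernel(np.abs(X - q))
    return np.sum(w * y) / np.sum(w)

def lwr_predict(X, y, q):
    #locally weighted regression: fit a weighted line around q,
    #so the local model can translate and rotate
    w = kernel(np.abs(X - q))
    A = np.column_stack([np.ones_like(X), X])  #design matrix [1, x]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return np.array([1.0, q]) @ beta

X = np.arange(10.0)
y = 2.0 * X + 1.0                 #exactly linear data
print(dwa_predict(X, y, 0.0))     #~1.59: dragged up toward the neighbors' y values
print(lwr_predict(X, y, 0.0))     #~1.0: the weighted line recovers y = 2x + 1

At the edge of the data, the weighted average is pulled toward the mean of the nearby $y_i$ (it can only move the estimate up or down), while the locally weighted fit also matches the slope, which is exactly the translate-and-rotate picture in the figures.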
# Δ ℚuantitative √ourney

## Q-learning with Neural Networks

### Learning Gridworld with Q-learning

#### Introduction

We've finally made it. We've made it to what we've all been waiting for: Q-learning with neural networks. Since I'm sure a lot of people didn't follow parts 1 and 2 because they were kind of boring, I will attempt to make this post relatively (but not completely) self-contained. In this post, we will dive into using Q-learning to train an agent (player) how to play Gridworld. Gridworld is a simple text-based game in which there is a 4x4 grid of tiles and 4 objects placed therein: a player, a pit, a goal, and a wall. The player can move up/down/left/right ($a \in A = \{up, down, left, right\}$), and the point of the game is to get to the goal, where the player will receive a numerical reward. Unfortunately, we have to avoid a pit, because if we land on the pit we are penalized with a negative 'reward'. As if our task weren't difficult enough, there's also a wall that can block the player's path (but it offers no reward or penalty).

#### Quick Review of Terms and Concepts (skip if you followed parts 1 & 2)

A state is all the information necessary (e.g. pixel data in a game) to make a decision that you expect will take you to a new (higher value) state. The high-level function of reinforcement learning is to learn the values of states or state-action pairs (the value of taking action $a$ given we're in state $s$). The value is some notion of how "good" that state or action is. Generally, this is a function of the rewards received now or in the future as a result of taking some action or being in some state.

A policy, denoted $\pi$, is the specific strategy we take in order to get into high-value states or take high-value actions to maximize our rewards over time. For example, a policy in blackjack might be to always hit until we have 19. We denote a function $\pi(s)$ that accepts a state $s$ and returns the action to be taken. Generally, $\pi(s)$ as a function just evaluates the value of all possible actions given the state $s$ and returns the highest-value action. This will result in a specific policy $\pi$ that may change over time as we improve our value estimates.

We call the function that accepts a state $s$ and returns the value of that state $v_{\pi}(s)$. This is the value function. Similarly, there is an action-value function $Q(s, a)$ that accepts a state $s$ and an action $a$ and returns the value of taking that action given that state. Some RL algorithms or implementations will use one or the other.

Importantly, if we base our algorithm on learning state-values (as opposed to action-values), we must keep in mind that the value of a state depends completely on our policy $\pi$. Using blackjack as an example, if we're in the state of having a card total of 20 and have two possible actions, hit or stay, the value of this state is only high if our policy says to stay when we have 20. If our policy said to hit when we have 20, we would probably bust and lose the game, and thus the value of that state would be low. More formally, the value of a state is equivalent to the value of the highest-value action taken in that state.

#### What is Q-learning?

Q-learning, like virtually all RL methods, is one type of algorithm used to calculate state-action values. It falls under the class of temporal difference (TD) algorithms, which suggests that time differences between actions taken and rewards received are involved.
In part 2, where we used a Monte Carlo method to learn to play blackjack, we had to wait until the end of a game (episode) to update our state-action values. With TD algorithms, we make updates after every action taken. In most cases, that makes more sense. We make a prediction (based on previous experience), take an action based on that prediction, receive a reward, and then update our prediction.

(Btw: Don't confuse the "Q" in Q-learning with the $Q$ function we've discussed in the previous parts. The $Q$ function is always the name of the function that accepts states and actions and spits out the value of that state-action pair. RL methods involve a $Q$ function but aren't necessarily Q-learning algorithms.)

Here's the tabular Q-learning update rule:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha[R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t)]$$

So, like Monte Carlo, we could have a table that stores the Q-value for every possible state-action pair and iteratively update this table as we play games. Our policy $\pi$ would be based on choosing the action with the highest Q-value for the given state.

But we're done with tables. This is 2015, we have GPUs and stuff. Well, as I alluded to in part 2, our $Q(s,a)$ function doesn't have to be just a lookup table. In fact, in most interesting problems, our state-action space is much too large to store in a table. Imagine a very simplified game of Pacman. If we implement it as a graphics-based game, the state would be the raw pixel data. In a tabular method, if the pixel data changes by just a single pixel, we have to store that as a completely separate entry in the table. Obviously that's silly and wasteful. What we need is some way to generalize and pattern-match between states. We need our algorithm to say "the value of these kinds of states is X" rather than "the value of this exact, super-specific state is X."

That's where neural networks come in. Or any other type of function approximator, even a simple linear model. We can use a neural network, instead of a lookup table, as our $Q(s,a)$ function. Just like before, it will accept a state and an action and spit out the value of that state-action. Importantly, however, unlike a lookup table, a neural network also has a bunch of parameters associated with it. These are the weights. So our $Q$ function actually looks like this: $Q(s, a, \theta)$, where $\theta$ is a vector of parameters. And instead of iteratively updating values in a table, we will iteratively update the $\theta$ parameters of our neural network so that it learns to provide us with better estimates of state-action values.

Of course we can use gradient descent (backpropagation) to train our $Q$ neural network just like any other neural network. But what's our target y vector (expected output vector)? Since the net is not a table, we don't use the formula shown above; our target is simply $r_{t+1} + \gamma \max_{a'} Q(s', a')$ for the state-action that just happened. $\gamma$ is a parameter between 0 and 1 called the discount factor. Basically, it determines how much each future reward is taken into consideration for updating our Q-value. If $\gamma$ is close to 0, we heavily discount future rewards and thus mostly care about immediate rewards. $s'$ refers to the new state after having taken action $a$, and $a'$ refers to the next actions possible in this new state. So $\max_{a'} Q(s', a')$ means we calculate all the Q-values for each state-action pair in the new state and take the maximum value to use in our new value update.
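To make the update rule concrete, here's one tabular update worked through in Python with made-up numbers (an illustrative sketch only):

alpha, gamma = 0.7, 0.9   #learning rate and discount factor (illustrative values)
q_sa = 1.0                #current estimate of Q(S_t, A_t)
reward = -1.0             #R_{t+1} observed after taking A_t
max_q_next = 2.0          #max over a of Q(S_{t+1}, a)

td_target = reward + gamma * max_q_next   #-1 + 0.9*2.0 = 0.8
td_error = td_target - q_sa               #0.8 - 1.0 = -0.2
q_sa = q_sa + alpha * td_error            #1.0 + 0.7*(-0.2) = 0.86
print(q_sa)                               #0.86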
(Note I may use $s'$ and $a'$ interchangeably with $s_{t+1}$ and $a_{t+1}$.)

One important note: our reward update for every state-action pair is $r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a)$, except when the state $s'$ is a terminal state. When we've reached a terminal state, the reward update is simply $r_{t+1}$. A terminal state is the last state in an episode. In our case, there are 2 terminal states: the state where the player fell into the pit (and receives -10) and the state where the player has reached the goal (and receives +10). Any other state is non-terminal and the game is still in progress.

There are two keywords I need to mention as well: on-policy and off-policy methods. In on-policy methods, we iteratively learn about state values at the same time that we improve our policy. In other words, the updates to our state values depend on the policy. In contrast, off-policy methods do not depend on the policy to update the value function. Q-learning is an off-policy method. It's advantageous because, with off-policy methods, we can follow one policy while learning about another. For example, with Q-learning, we could always take completely random actions and yet we would still learn about another policy of taking the best actions in every state. If there's ever a $\pi$ referenced in the value-update part of the algorithm, then it's an on-policy method.

### Gridworld Details

Before we get too deep into the neural network Q-learning stuff, let's discuss the Gridworld game implementation that we're using as our toy problem. We're going to implement 3 variants of the game, in order of increasing difficulty. The first version will initialize the grid in exactly the same way each time. That is, every new game starts with the player (P), goal (+), pit (-), and wall (W) in exactly the same positions. Thus the algorithm just needs to learn how to take the player from a known starting position to a known end position without hitting the pit, which gives out negative rewards. The second implementation is slightly more difficult: the goal, pit and wall will always be initialized in the same positions, but the player will be placed randomly on the grid on each new game. The third implementation is the most difficult to learn, and that's where all elements are randomly placed on the grid each game.

Let's get to coding.

In [1]:

import numpy as np

def randPair(s,e):
    return np.random.randint(s,e), np.random.randint(s,e)

#finds an array in the "depth" dimension of the grid
def findLoc(state, obj):
    for i in range(0,4):
        for j in range(0,4):
            if (state[i,j] == obj).all():
                return i,j

#Initialize stationary grid, all items are placed deterministically
def initGrid():
    state = np.zeros((4,4,4))
    #place player
    state[0,1] = np.array([0,0,0,1])
    #place wall
    state[2,2] = np.array([0,0,1,0])
    #place pit
    state[1,1] = np.array([0,1,0,0])
    #place goal
    state[3,3] = np.array([1,0,0,0])
    return state

#Initialize player in random location, but keep wall, goal and pit stationary
def initGridPlayer():
    state = np.zeros((4,4,4))
    #place player
    state[randPair(0,4)] = np.array([0,0,0,1])
    #place wall
    state[2,2] = np.array([0,0,1,0])
    #place pit
    state[1,1] = np.array([0,1,0,0])
    #place goal
    state[1,2] = np.array([1,0,0,0])

    a = findLoc(state, np.array([0,0,0,1])) #find grid position of player (agent)
    w = findLoc(state, np.array([0,0,1,0])) #find wall
    g = findLoc(state, np.array([1,0,0,0])) #find goal
    p = findLoc(state, np.array([0,1,0,0])) #find pit
    if (not a or not w or not g or not p):
        #print('Invalid grid. Rebuilding..')
        return initGridPlayer()

    return state

#Initialize grid so that goal, pit, wall, player are all randomly placed
def initGridRand():
    state = np.zeros((4,4,4))
    #place player
    state[randPair(0,4)] = np.array([0,0,0,1])
    #place wall
    state[randPair(0,4)] = np.array([0,0,1,0])
    #place pit
    state[randPair(0,4)] = np.array([0,1,0,0])
    #place goal
    state[randPair(0,4)] = np.array([1,0,0,0])

    a = findLoc(state, np.array([0,0,0,1]))
    w = findLoc(state, np.array([0,0,1,0]))
    g = findLoc(state, np.array([1,0,0,0]))
    p = findLoc(state, np.array([0,1,0,0]))
    #If any of the "objects" are superimposed, just call the function again to re-place
    if (not a or not w or not g or not p):
        #print('Invalid grid. Rebuilding..')
        return initGridRand()

    return state

The state is a 3-dimensional numpy array (4x4x4). You can think of the first two dimensions as the positions on the board; e.g. row 1, column 2 is the position (1,2) [zero indexed] on the board. The 3rd dimension encodes the object/element at that position. Since there are 4 different possible objects, the 3rd dimension of the state contains vectors of length 4. We're using a one-hot encoding for the elements, except that an empty position is just a vector of all zeros. So with a length-4 vector we're encoding 5 possible options at each grid position: empty, player, goal, pit, or wall. You can also think of the 3rd dimension as being divided into 4 separate grid planes, where each plane represents the position of each element. So one example would be where the player is at grid position (3,0), the wall is at (0,0), the pit is at (0,1) and the goal is at (1,0). [All other elements are 0s; the accompanying figure is not reproduced here.]

In our simple implementation, it's possible for the board to be initialized such that some of the objects contain a 1 at the same "x,y" position (but different "z" positions), which indicates they're at the same position on the grid. Obviously we don't want to initialize the board in this way, so for the last 2 variants of the game that involve some element of random initialization, we check whether we can find "clean" arrays (only one "1" in the 'Z' dimension of a particular grid position) for the various element types on the grid, and if not, we just recursively call the initialize-grid function until we get a state where elements are not superimposed. When the player successfully plays the game and lands on the goal, the player and goal positions will be superimposed, and that is how we know the player has won (likewise if the player hits the pit and loses). The wall is supposed to block the movement of the player, so we prevent the player from taking an action that would place them at the same position as the wall. Additionally, the grid is "enclosed" so that the player cannot walk through the edges of the grid.

Now we will implement the movement function.

In [1]:

def makeMove(state, action):
    #need to locate player in grid
    #need to determine what object (if any) is in the new grid spot the player is moving to
    player_loc = findLoc(state, np.array([0,0,0,1]))
    wall = findLoc(state, np.array([0,0,1,0]))
    goal = findLoc(state, np.array([1,0,0,0]))
    pit = findLoc(state, np.array([0,1,0,0]))
    state = np.zeros((4,4,4))

    actions = [[-1,0],[1,0],[0,-1],[0,1]]
    #e.g. up => (player row - 1, player column + 0)
    new_loc = (player_loc[0] + actions[action][0], player_loc[1] + actions[action][1])
    if (new_loc != wall):
        if ((np.array(new_loc) <= (3,3)).all() and (np.array(new_loc) >= (0,0)).all()):
            state[new_loc][3] = 1

    new_player_loc = findLoc(state, np.array([0,0,0,1]))
    if (not new_player_loc):
        state[player_loc] = np.array([0,0,0,1])
    #re-place pit
    state[pit][1] = 1
    #re-place wall
    state[wall][2] = 1
    #re-place goal
    state[goal][0] = 1

    return state

The first thing we do is try to find the positions of each element on the grid (state). Then it's just a few simple if-conditions. We need to make sure the player isn't trying to step on the wall, and make sure that the player isn't stepping outside the bounds of the grid.

Now we implement getLoc, which is similar to findLoc but can identify superimposed elements, whereas findLoc would miss them (intentionally) if there was superimposition. Additionally, we'll implement our reward function, which will award +10 if the player steps onto the goal, -10 if the player steps into the pit, and -1 for any other move. These rewards are pretty arbitrary; as long as the goal has a significantly higher reward than the pit, the algorithm should do fine. Lastly, I've implemented a function that will display our grid as a text array so we can see what's going on.

In [3]:

def getLoc(state, level):
    for i in range(0,4):
        for j in range(0,4):
            if (state[i,j][level] == 1):
                return i,j

def getReward(state):
    player_loc = getLoc(state, 3)
    pit = getLoc(state, 1)
    goal = getLoc(state, 0)
    if (player_loc == pit):
        return -10
    elif (player_loc == goal):
        return 10
    else:
        return -1

def dispGrid(state):
    grid = np.zeros((4,4), dtype='<U2')
    player_loc = findLoc(state, np.array([0,0,0,1]))
    wall = findLoc(state, np.array([0,0,1,0]))
    goal = findLoc(state, np.array([1,0,0,0]))
    pit = findLoc(state, np.array([0,1,0,0]))
    for i in range(0,4):
        for j in range(0,4):
            grid[i,j] = ' '

    if player_loc:
        grid[player_loc] = 'P' #player
    if wall:
        grid[wall] = 'W' #wall
    if goal:
        grid[goal] = '+' #goal
    if pit:
        grid[pit] = '-' #pit

    return grid

And that's it. That's the entire Gridworld game implementation. Not too bad, right? As with my part 2 blackjack implementation, this game does not use OOP style; it's implemented in a functional style where we just pass around states. Let's demonstrate some gameplay. I'll be using the initGridRand() variant, so that all items are placed randomly.

In [422]:

state = initGridRand()
dispGrid(state)

Out[422]:

array([['P', '-', ' ', ' '],
       [' ', ' ', ' ', ' '],
       [' ', ' ', 'W', ' '],
       [' ', '+', ' ', ' ']], dtype='<U2')

As you can see, I clearly need to move 3 spaces down and 1 space to the right to land on the goal. Remember, our action encoding is: 0 = up, 1 = down, 2 = left, 3 = right.

In [423]:

state = makeMove(state, 1)
state = makeMove(state, 1)
state = makeMove(state, 1)
state = makeMove(state, 3)
print('Reward: %s' % (getReward(state),))
dispGrid(state)

Reward: 10

Out[423]:

array([[' ', '-', ' ', ' '],
       [' ', ' ', ' ', ' '],
       [' ', ' ', 'W', ' '],
       [' ', ' ', ' ', ' ']], dtype='<U2')

We haven't implemented a display for when the player is on the goal or pit, so the player and goal just disappear when that happens.

### Neural Network as our Q function

Now for the fun part. Let's build our neural network that will serve as our $Q$ function. Since this is a post about Q-learning, I'm not going to code a neural network from scratch. I'm going to use the fairly popular Theano-based library Keras. You can of course use whatever library you want, or roll your own.
Important Note: Up until now, I've been talking about how the neural network can serve the role of $Q(s, a)$, and that's absolutely true. However, I will be implementing our neural network in the same way that Google DeepMind did for its Atari-playing algorithm. Instead of a neural network architecture that accepts a state and an action as inputs and outputs the value of that single state-action pair, DeepMind built a network that just accepts a state and outputs separate Q-values for each possible action in its output layer. This is pretty clever, because in Q-learning we need to get $\max_{a'} Q(s', a')$ [the max of the Q-values over every possible action in the new state $s'$]. Rather than having to run our network forward for every action, we just need to run it forward once. The result is the same; it's just more efficient.

In [4]:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop

In [20]:

model = Sequential()
#64-unit input, two hidden layers of 164 and 150 units, 4-unit output,
#as described in the prose below
model.add(Dense(164, init='lecun_uniform', input_shape=(64,)))
model.add(Activation('relu'))
#model.add(Dropout(0.2)) I'm not using dropout, but maybe you wanna give it a try?

model.add(Dense(150, init='lecun_uniform'))
model.add(Activation('relu'))

model.add(Dense(4, init='lecun_uniform'))
model.add(Activation('linear')) #linear output so we can have range of real-valued outputs

rms = RMSprop()
model.compile(loss='mse', optimizer=rms)

In [384]:

model.predict(state.reshape(1,64), batch_size=1)
#just to show an example output; read outputs left to right: up/down/left/right

Out[384]:

array([[-0.02812552, -0.04649779, -0.08819015, -0.00723661]])

So that's the network I've designed. An input layer of 64 units (because our state has a total of 64 elements; remember, it's a 4x4x4 numpy array), 2 hidden layers of 164 and 150 units, and an output layer of 4, one for each of our possible actions (up, down, left, right) [in that order]. Why did I make the network like this? Honestly, I have no good answer for that. I just messed around with different hidden-layer architectures and this one seemed to work fairly well. Feel free to change it up. There's probably a better configuration. (If you discover or know of a much better network architecture for this, let me know.)

### Online Training

Below is the implementation for the main loop of the algorithm. In broad strokes:

1. Set up a for-loop over the number of epochs.
2. In the loop, set up a while-loop (while the game is in progress).
3. Run the Q network forward.
4. We're using an epsilon-greedy implementation, so at time t, with probability $\epsilon$ we will choose a random action. With probability $1-\epsilon$ we will choose the action associated with the highest Q-value from our neural network.
5. Take action $a$ as determined in (4); observe the new state $s'$ and reward $r_{t+1}$.
6. Run the network forward using $s'$. Store the highest Q-value (maxQ).
7. Our target value to train the network is reward + (gamma * maxQ), where gamma is a parameter ($0 \leq \gamma \leq 1$).
8. Given that we have 4 outputs and we only want to update/train the output associated with the action we just took, our target output vector is the same as the output vector from the first run, except we change the one output associated with our action to: reward + (gamma * maxQ).
9. Train the model on this 1 sample. Repeat steps 2-9.

Just to be clear, when we first run our neural network and get an output of action-values like this

array([[-0.02812552, -0.04649779, -0.08819015, -0.00723661]])

our target vector for one iteration may look like this:

array([[-0.02812552, -0.04649779, 10, -0.00723661]])

if taking action 2 (one step left) resulted in reaching the goal. So we just keep all other outputs the same as before and only change the one for the action we took.

Also note, I initialize epsilon (for the $\epsilon$-greedy action selection) to be 1.
It decrements by a small amount on every iteration and will eventually reach 0.1, where it stays. Google DeepMind also used an $\epsilon$-greedy action selection, likewise initializing epsilon to 1 and decrementing it during game play.

Okay, so let's go ahead and train our algorithm to learn the easiest variant of the game, where all elements are placed deterministically at the same positions every time.

In [29]:

from IPython.display import clear_output
import random

epochs = 1000
gamma = 0.9 #since it may take several moves to goal, making gamma high
epsilon = 1
for i in range(epochs):
    state = initGrid()
    status = 1
    #while game still in progress
    while(status == 1):
        #We are in state S
        #Let's run our Q function on S to get Q values for all possible actions
        qval = model.predict(state.reshape(1,64), batch_size=1)
        if (random.random() < epsilon): #choose random action
            action = np.random.randint(0,4)
        else: #choose best action from Q(s,a) values
            action = (np.argmax(qval))
        #Take action, observe new state S'
        new_state = makeMove(state, action)
        #Observe reward
        reward = getReward(new_state)
        #Get max_Q(S',a)
        newQ = model.predict(new_state.reshape(1,64), batch_size=1)
        maxQ = np.max(newQ)
        y = np.zeros((1,4))
        y[:] = qval[:]
        if reward == -1: #non-terminal state
            update = (reward + (gamma * maxQ))
        else: #terminal state
            update = reward
        y[0][action] = update #target output
        print("Game #: %s" % (i,))
        model.fit(state.reshape(1,64), y, batch_size=1, nb_epoch=1, verbose=1)
        state = new_state
        if reward != -1:
            status = 0
        clear_output(wait=True)
    if epsilon > 0.1:
        epsilon -= (1/epochs)

Game #: 999
Epoch 1/1
1/1 [==============================] - 0s - loss: 0.0265

Alright, so I've empirically tested this, and it trains on the easy variant with just 1000 epochs (keep in mind every epoch is a full game played to completion). Below I've implemented a function we can use to test our trained algorithm, to see if it has properly learned how to play the game. It basically just uses the neural network model to calculate action-values for the current state and selects the action with the highest Q-value. It repeats this until the game is won or lost. I've made it break out of this loop if it is making more than 10 moves, because this probably means it hasn't learned how to win and we don't want an infinite loop running.
In [6]:

def testAlgo(init=0):
    i = 0
    if init==0:
        state = initGrid()
    elif init==1:
        state = initGridPlayer()
    elif init==2:
        state = initGridRand()

    print("Initial State:")
    print(dispGrid(state))
    status = 1
    #while game still in progress
    while(status == 1):
        qval = model.predict(state.reshape(1,64), batch_size=1)
        action = (np.argmax(qval)) #take action with highest Q-value
        print('Move #: %s; Taking action: %s' % (i, action))
        state = makeMove(state, action)
        print(dispGrid(state))
        reward = getReward(state)
        if reward != -1:
            status = 0
            print("Reward: %s" % (reward,))
        i += 1
        #If we're taking more than 10 actions, just stop, we probably can't win this game
        if (i > 10):
            print("Game lost; too many moves.")
            break

In [30]:

testAlgo(init=0)

Initial State:
[[' ' 'P' ' ' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' '+']]
Move #: 0; Taking action: 3
[[' ' ' ' 'P' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' '+']]
Move #: 1; Taking action: 3
[[' ' ' ' ' ' 'P']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' '+']]
Move #: 2; Taking action: 1
[[' ' ' ' ' ' ' ']
 [' ' '-' ' ' 'P']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' '+']]
Move #: 3; Taking action: 1
[[' ' ' ' ' ' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' 'P']
 [' ' ' ' ' ' '+']]
Move #: 4; Taking action: 1
[[' ' ' ' ' ' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Reward: 10

Can we get a round of applause for our Gridworld player here? Clearly it knows what it's doing; it went straight for the prize!

### Playing the harder variant: catastrophic forgetting and experience replay

We're slowly building up our chops, and we want our algorithm to train on the harder variant of the game, where every new game the player is randomly placed on the grid. It can't just memorize a sequence of steps to take as before; it needs to be able to take the shortest path to the goal (without stepping into the pit) from wherever it starts on the grid. It needs to develop a slightly more sophisticated representation of its environment.

Unfortunately, there is a problem we may need to deal with as our problem becomes increasingly more difficult. There is a known problem called catastrophic forgetting that is associated with gradient-descent-based training methods in online training. Imagine that in game #1, which our algorithm is training on (learning Q-values for), the player is placed in between the pit and the goal, such that the goal is on the right and the pit is on the left. Using the epsilon-greedy strategy, the player takes a random move and, by chance, takes a step to the right and hits the goal. Great: the algorithm will try to learn that this state-action pair is associated with a high reward by updating its weights in such a way that the output will more closely match the target value (i.e., backpropagation). Now, the second game gets initialized, and the player is again in between the goal and pit, but this time the goal is on the left and the pit is on the right. Perhaps to our naive algorithm, the state seems very similar to the last game. Let's say that, again, the player chooses to make one step to the right, but this time it ends up in the pit and gets -10 reward. The player is thinking "what the hell, I thought going to the right was the best decision based on my previous experience." So now it may do backpropagation again to update its state-action value, but because this state-action is very similar to the last learned state-action, it may mess up its previously learned weights. This is the essence of catastrophic forgetting.
There's a push-pull between very similar state-actions (but with divergent targets) that results in this inability to properly learn anything. We generally don't have this problem in the supervised learning realm because we do randomized batch learning, where we don't update our weights until we've iterated through some random subset of our training data. Catastrophic forgetting is probably not something we have to worry about with the first variant of our game because the targets are always stationary; but with the harder variants, it's something we should consider, and that is why I'm implementing something called experience replay. Experience replay basically gives us minibatch updating in an online learning scheme. It's actually not a huge deal to implement; here's how it works.

Experience replay:

1. In state $s$, take action $a$, observe new state $s_{t+1}$ and reward $r_{t+1}$.
2. Store this as a tuple $(s, a, s_{t+1}, r_{t+1})$ in a list (the code below actually stores it in the order (state, action, reward, new_state)).
3. Continue to store each experience in this list until we have filled the list to a specific length (up to you to define).
4. Once the experience replay memory is filled, randomly select a subset (e.g. 40).
5. Iterate through this subset and calculate value updates for each; store these in a target array (e.g. y_train) and store the state $s$ of each memory in X_train.
6. Use X_train and y_train as a minibatch for batch training. For subsequent epochs where the array is full, just overwrite old values in our experience replay memory array.

Thus, in addition to learning the action-value for the action we just took, we're also going to use a random sample of our past experiences to train on to prevent catastrophic forgetting. So here's the same training algorithm from above except with experience replay added. Remember, this time we're training it on the harder variant of the game where the player is randomly placed on the grid.
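One note before the code: the experience list acts as a fixed-size ring buffer, which the implementation below manages by hand with an index h that wraps around. A common alternative (not what this post's code uses) is collections.deque with maxlen, which discards the oldest experience automatically; a self-contained sketch with dummy data:

import random
from collections import deque

replay = deque(maxlen=80)        # appending beyond 80 items silently drops the oldest
for t in range(200):             # dummy experiences standing in for (S, A, R, S') tuples
    replay.append((t, t % 4, -1, t + 1))
print(len(replay))               # 80: only the most recent experiences remain
minibatch = random.sample(list(replay), 40)  # same random subset selection as step 4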
In [21]:
model.compile(loss='mse', optimizer=rms)  # reset the weights of the neural network
epochs = 3000
gamma = 0.975
epsilon = 1
batchSize = 40
buffer = 80
replay = []  # stores tuples of (S, A, R, S')
h = 0
for i in range(epochs):
    state = initGridPlayer()  # using the harder state initialization function
    status = 1
    # while game still in progress
    while(status == 1):
        # We are in state S
        # Let's run our Q function on S to get Q values for all possible actions
        qval = model.predict(state.reshape(1,64), batch_size=1)
        if (random.random() < epsilon):  # choose random action
            action = np.random.randint(0,4)
        else:  # choose best action from Q(s,a) values
            action = (np.argmax(qval))
        # Take action, observe new state S'
        new_state = makeMove(state, action)
        # Observe reward
        reward = getReward(new_state)
        # Experience replay storage
        if (len(replay) < buffer):  # if buffer not filled, add to it
            replay.append((state, action, reward, new_state))
        else:  # if buffer full, overwrite old values
            if (h < (buffer-1)):
                h += 1
            else:
                h = 0
            replay[h] = (state, action, reward, new_state)
            # randomly sample our experience replay memory
            minibatch = random.sample(replay, batchSize)
            X_train = []
            y_train = []
            for memory in minibatch:
                # Unpack with memory-local names so we don't clobber the outer-loop
                # action/reward/new_state (a subtle bug otherwise)
                old_state, action_m, reward_m, new_state_m = memory
                old_qval = model.predict(old_state.reshape(1,64), batch_size=1)
                # Get max_Q(S',a)
                newQ = model.predict(new_state_m.reshape(1,64), batch_size=1)
                maxQ = np.max(newQ)
                y = np.zeros((1,4))
                y[:] = old_qval[:]
                if reward_m == -1:  # non-terminal state
                    update = (reward_m + (gamma * maxQ))
                else:  # terminal state
                    update = reward_m
                y[0][action_m] = update
                X_train.append(old_state.reshape(64,))
                y_train.append(y.reshape(4,))
            X_train = np.array(X_train)
            y_train = np.array(y_train)
            print("Game #: %s" % (i,))
            model.fit(X_train, y_train, batch_size=batchSize, nb_epoch=1, verbose=1)
        state = new_state
        if reward != -1:  # if reached terminal state, update game status
            status = 0
    clear_output(wait=True)
    if epsilon > 0.1:  # decrement epsilon over time
        epsilon -= (1.0/epochs)

Game #: 2999
Epoch 1/1
40/40 [==============================] - 0s - loss: 0.0018

I've increased the training epochs to 3000 just based on empirical testing. So let's see how it does; we'll run our testAlgo() function a couple of times to see how it handles randomly initialized player scenarios.

In [22]:
testAlgo(1)  # run testAlgo using random player placement => initGridPlayer()

Initial State:
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' 'P' ' ']]
Move #: 0; Taking action: 3
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' 'P']]
Move #: 1; Taking action: 0
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' 'P']
 [' ' ' ' ' ' ' ']]
Move #: 2; Taking action: 0
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' 'P']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 3; Taking action: 2
[[' ' ' ' ' ' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Reward: 10

Fantastic. Let's run testAlgo() one more time just to prove it has generalized.
In [28]:
testAlgo(init=1)  # Of course, I ran it many more times than I'm showing here

Initial State:
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' 'P' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 0; Taking action: 2
[[' ' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 ['P' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 1; Taking action: 0
[[' ' ' ' ' ' ' ']
 ['P' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 2; Taking action: 0
[['P' ' ' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 3; Taking action: 3
[[' ' 'P' ' ' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 4; Taking action: 3
[[' ' ' ' 'P' ' ']
 [' ' '-' '+' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Move #: 5; Taking action: 1
[[' ' ' ' ' ' ' ']
 [' ' '-' ' ' ' ']
 [' ' ' ' 'W' ' ']
 [' ' ' ' ' ' ' ']]
Reward: 10

I'll be darned. It seems to have learned to play the game from any starting position! Pretty neat.

### The Hardest Variant

Okay, I lied. I will not be showing you the algorithm learning the hardest variant of the game (where all 4 elements are randomly placed on the grid each game). I'm leaving that up to you to attempt; let me know how it goes via email ([email protected]). The reason is that I'm doing all this on a MacBook Air (read: no CUDA GPU), and thus I cannot train the algorithm for a sufficiently large number of epochs for it to learn the problem. I suspect it may require significantly more epochs, perhaps more than 50,000. So if you have an NVIDIA GPU and can train it that long, let me know if it works. I could have used Lua/Torch7, since there is an OpenCL version, but no one would read this if it wasn't in Python =P.

### Conclusion

There you have it: basic Q-learning using neural networks. That was a lot to go through, and hopefully I didn't make too many mistakes (as always, email me if you spot any so I can post corrections). I hope you have success training Q-learning algorithms on more interesting problems than the gridworld game. I'd say this is definitely the climax of the series on reinforcement learning. I plan to release a part 4 about other temporal difference learning algorithms that use eligibility traces. Since that's a relatively minor new concept, I will likely use it on another toy problem like gridworld. I do, at some point, want to release a post about setting up and using the Arcade Learning Environment (ALE) [formerly the Atari Learning Environment] and training an algorithm to play Atari games; however, that will likely be a long while from now, so don't hold your breath. Cheers
# Integer solution to exponential diophantine equation

1. Sep 28, 2011

### adoado

Hey everyone!

I was recently scribbling on paper, and after a series of ideas, I got stuck with a problem. That is, can I find out if there exist some integers A and B such that $C=2^{A}3^{B}$ for some integer C? For an arbitrary C, how do I know whether some $A, B \in \textbf{Z}$ exist?

Cheers for reading!
Adrian

2. Sep 28, 2011

### dodo

Hi, Adrian, it shouldn't be harder than testing if the number is divisible by 2 or by 3; and, in that case, if you are interested in the actual values of A and B, just divide and iterate. Unless you mean really big numbers.

3. Sep 28, 2011

### RamaWolf

Code (Text):
Solving with a computer: factor C, giving a list of its prime factors and their occurrences;
if there is a factor > 3, then 'No, C is not of the required form';
else 'Yes', and A and B are the number of times the factors 2 and 3 (respectively) occur.

Solving with paper and your head:
Set A and B to zero.
Loop2: if C is even, replace C by C / 2 and add 1 to A; loop until C is odd.
Loop3: if C is a multiple of 3 (add the digits modulo 3), replace C by C / 3 and add 1 to B; loop until C is not a multiple of 3.
Test: if the remaining C is one, then 'Yes', else 'No'.
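RamaWolf's divide-and-iterate procedure is only a few lines of Python; a minimal sketch (the function name is my own):

def two_three_factorization(c):
    """Return (A, B) with c == 2**A * 3**B, or None if no such integers exist."""
    if c < 1:
        return None
    a = 0
    while c % 2 == 0:  # Loop2: strip factors of 2
        c //= 2
        a += 1
    b = 0
    while c % 3 == 0:  # Loop3: strip factors of 3
        c //= 3
        b += 1
    return (a, b) if c == 1 else None  # Test: any remainder means another prime divides C

print(two_three_factorization(108))  # (2, 3), since 108 = 4 * 27
print(two_three_factorization(10))   # None, since 5 divides 10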
Bench Power Supply

Ravi (Member):

Dear Friends,
The transformer of my power supply (which I used for 4-5 years) has given out recently. I want to build a new PS with some more options. My specifications are:

Current rating 1.5 A
Ripple less than 0.1%
5 Volt dual-rail, well regulated/balanced output
12 Volt dual-rail, well regulated/balanced output
Variable regulated output, say from 1.25 V to 24 V, single rail (I know I can use the LM337, LM317, or LM350)
Digital voltmeter/ammeter
Circuit for a blown-fuse indicator

I wanted to use IC regulators with a minimum number of transistors. In my country it is difficult to obtain 2N-series transistors and multi-secondary power transformers. (However, I have some 2N3055 power transistors on hand.)
Thanks & regards
Ravi

kinjalgp (Active Member):

With all those specifications, your bill of materials will burn a hole in your pocket. Instead, you can buy a ready-made dual-tracking power supply, which will cost you much less than making one yourself.

Ravi (Member):

I know I will have to spend a lot of money. In that case I may take out the digital meter and go for a normal analog meter. However, the electronics hobby is an expensive game. Thanks.

kinjalgp (Active Member):

Ok, here are some schematics. You'll have to make some modifications according to your needs.

stevez (Active Member):

I've seen several power supply projects where a low-cost DVM was used as a panel meter - more or less glued onto the front of the chassis. Watch the sales and they can be had for $10 to $15 here in the US - which is about the cost of a panel meter.
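For the variable rail mentioned above, the standard adjustable-regulator design equation from the LM317/LM350 datasheets applies; here is a quick sketch of the resistor math (the component values are illustrative, not a tested design):

# LM317/LM350 adjustable regulator: Vout = 1.25 * (1 + R2/R1) + I_adj * R2
VREF = 1.25    # volts, reference between the OUT and ADJ pins
I_ADJ = 50e-6  # amperes, typical ADJ-pin current; often negligible

def lm317_vout(r1_ohms, r2_ohms):
    """Output voltage for the given feedback resistor pair."""
    return VREF * (1 + r2_ohms / r1_ohms) + I_ADJ * r2_ohms

print(lm317_vout(240, 0))     # 1.25 V with the adjustment pot at minimum
print(lm317_vout(240, 4400))  # ~24.4 V, near the top of the requested range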
### Hydrodynamic characteristics of stable growth-rill flow on loess slopes

Longsheng WANG 1,2, Qiangguo CAI 1, Chongfa CAI 2, Liying SUN 1,3

1. Key Laboratory of Water Cycle and Related Land Surface Processes, Institute of Geographic Sciences and Natural Resources Research, CAS, Beijing 100101, China
2. College of Resources and Environment, Huazhong Agricultural University, Wuhan 430070, China
3. State Key Laboratory of Soil Erosion and Dryland Farming on Loess Plateau, Research Center for Soil and Water Conservation and Eco-environmental Sciences, CAS, Yangling 712100, Shaanxi, China

Online: 2014-08-25. Published: 2014-08-25.

First author: WANG Longsheng (1988- ), male, from Weihai, Shandong; Master's student; main research interest: hillslope soil erosion. E-mail: [email protected]

Funding: National Natural Science Foundation of China (41271304); Open Fund of the State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Research Center for Soil and Water Conservation and Eco-environmental Sciences, CAS and Ministry of Education (K318009902-1315)

Abstract: Rill erosion is the main form of erosion on the sloping farmland of the Loess Plateau: it accounts for 70% of the total amount of slope erosion, plays an important role in the soil erosion process on loess hillslopes, and marks the beginning of qualitative change in the erosion process. Studies of rill erosion can help control soil erosion on slopes, facilitate agricultural production, and serve as a foundation for studying the development of soil erosion. Natural rainfall may occur intermittently, and rills may experience a second rainfall within a short time period, but studies on the hydrodynamic characteristics of rill flow under these circumstances have been few. This research was carried out in the rainfall-simulation laboratory of the State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau from March to May 2010. Artificial rainfall was applied to a loess slope at an interval of 24 hours under two rainfall conditions: a first rain that formed relatively stable rills, followed by a light rainfall. The results show that: (1) Rill flow velocity was affected only slightly by slope length; the average velocity of rill flow on slopes of different lengths differed little. On the other hand, rill flow velocity was greatly influenced by rill morphology: compared with the 20° slope, the rill density of the 25° slope was higher while its flow velocity was lower. (2) Shear stress is jointly affected by flow discharge and slope. The Reynolds number and the Froude number were significantly associated with flow shear stress: the Reynolds number is positively correlated with flow shear stress, while the Froude number is negatively correlated with it. (3) As the distance from the top of the slope increases, the Darcy-Weisbach resistance coefficient tends to increase as well. Under the same rainfall intensity in the second rain, the resistance coefficient of the steeper slope was higher, indicating a close relationship between the resistance coefficient, runoff, and slope. There is a significant positive correlation between the resistance coefficient and the Reynolds number: a higher Reynolds number means a greater average flow rate and flow intensity, and as the intensity of the water flow increases, rill morphology becomes more complex and flow resistance increases. Although an increased Reynolds number also means a larger flow depth, the test results show that under steep-slope conditions the resistance coefficient is mainly governed by the flow velocity.

CLC number: S157.1
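For readers outside hydraulics, the flow indices discussed above have standard open-channel definitions (these are textbook conventions, not taken from the paper itself):

$$\mathrm{Re} = \frac{vR}{\nu}, \qquad \mathrm{Fr} = \frac{v}{\sqrt{gh}}, \qquad f = \frac{8gRJ}{v^{2}}, \qquad \tau = \rho g R J,$$

where $v$ is the mean flow velocity, $R$ the hydraulic radius, $\nu$ the kinematic viscosity of water, $h$ the flow depth, $J$ the hydraulic gradient, $g$ the gravitational acceleration, $f$ the Darcy-Weisbach resistance coefficient, and $\tau$ the flow shear stress.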
2018, Vol. 70, No. 11

# Three-Dimensional Matrix Superpotentials

We consider a special case for curves in two-, three-, and four-dimensional Euclidean spaces and obtain a necessary and sufficient condition for the tensor product surfaces of the planar unit circle centered at the origin and these curves to have a harmonic Gauss map.

We present a classification of matrix superpotentials that correspond to exactly solvable systems of Schrodinger equations. Superpotentials of the following form are considered: $W_k = kQ + P + \frac{1}{k}R$, where $k$ is a parameter and $P$, $Q$, and $R$ are Hermitian matrices that depend on a variable $x$. The list of three-dimensional matrix superpotentials is explicitly presented.
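For context, in supersymmetric quantum mechanics a superpotential $W_k$ generates a pair of partner Hamiltonians (standard conventions with $\hbar = 2m = 1$; this background is not part of the abstract itself):

$$H_{\pm}^{(k)} = -\frac{d^{2}}{dx^{2}} + W_k^{2} \pm W_k',$$

and exact solvability follows when the pair is shape invariant, i.e. when

$$W_k^{2} + W_k' = W_{k+1}^{2} - W_{k+1}' + C_k$$

for some constant $C_k$. Classifications of matrix superpotentials search for matrix-valued $W_k$ of the stated form that satisfy such a condition.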
## Touch Sensor

ID: 1129_1B

Recommended for new designs: This product (or a similar replacement with a compatible form, fit and function) is estimated to be available for ten years or more.

Detect a touch through up to 1/2" of glass or plastic with this low-cost capacitive sensor.

Price: $9.00 (Quantity Available: 1000+)

Qty | Price
5 | $8.55
10 | $8.10
25 | $7.20
50 | $6.30
100 | $5.85

Note: The 1129_1B is identical to the 1129_1, except that you have the option of which size of Phidget cable to include.

The 1129 is a capacitive touch sensor and can detect a touch through plastic, glass, or paper. We recommend a material thickness of up to 1/2". The sensor can work as a close proximity sensor, sensing objects at a distance of up to 1/2" from the board in all directions without direct contact.

On the bottom side of the Touch Sensor there is a small exposed metallic pad. A soldered connection can be made to the pad to increase the size and dimensions of the touchable area, such as attaching the sensor to a metallic object or some wire.

The 1129_1B has a sensing chip that recalibrates after 45 seconds of contact (as opposed to 60 seconds on the 1129_0). Other than that, this sensor functions the same as the 1129_0. Please refer to the table below to see the differences:

#### Comparisons with previous versions

Feature | 1110 | 1129_0 | 1129_1 or 1129_1B
Sensitivity (Thickness of Material) | 1/8" | 1/2" | 1/2"
Sensor value (voltage) when Touching | 0 (0V) | 1000 (5V) | 1000 (5V)
Recalibration Timeout | N/A | 60s | 45s

#### Interface Boards and Hubs

This sensor can be read by any Phidget with an Analog Input or VINT Hub port. It will connect to either one using the included Phidget cable. VINT Hub ports can behave just like Analog Inputs, but have the added flexibility of being able to be used as digital inputs, digital outputs, or ports to communicate with VINT devices. For more information about VINT, see the VINT Primer.

Part Number | Price | Number of Voltage Inputs | Voltage Input Resolution
1010_0 | $80.00 | 8 | 10 bit
1011_0 | $50.00 | 2 | 10 bit
1018_2B | $80.00 | 8 | 10 bit
1019_1B | $110.00 | 8 | 10 bit
1203_2B | $70.00 | 8 | 10 bit
DAQ1000_0 | $20.00 | 8 | 12 bit
HUB0000_0 | $30.00 | 6 (Shared) | 16 bit
HUB5000_0 | $60.00 | 6 (Shared) | 16 bit
SBC3003_0 | $120.00 | 6 (Shared) | 16 bit

#### Phidget Cables

This sensor comes with its own Phidget cable to connect it to an InterfaceKit or Hub, but if you need extras we have a full list below. You can solder multiple cables together in order to make even longer Phidget cables, but you should be aware of the effects of having long wires in your system.

Part Number | Price | Cable Length
3002_0 | $2.00 | 600 mm
3003_0 | $1.50 | 100 mm
3004_0 | $3.00 | 3.5 m
3034_0 | $1.50 | 150 mm
3038_0 | $2.25 | 1.2 m
3039_0 | $2.75 | 1.8 m
CBL4104_0 | $1.75 | 300 mm
CBL4105_0 | $2.00 | 900 mm
CBL4106_0 | $2.50 | 1.5 m

## Getting Started

Welcome to the 1129 user guide! In order to get started, make sure you have the following hardware on hand:

Next, you will need to connect the pieces:
1. Connect the 1129 to the HUB0000 with the Phidget cable.
2. Connect the HUB0000 to your computer with the USB cable.

Now that you have everything together, let's start using the 1129!

## Using the 1129

### Phidget Control Panel

In order to demonstrate the functionality of the 1129, we will connect it to the HUB0000, and then run an example using the Phidget Control Panel on a Windows machine. The Phidget Control Panel is available for use on both macOS and Windows machines. If you would like to follow along, first take a look at the getting started guide for your operating system.

### First Look

After plugging the 1129 into the HUB0000, and the HUB0000 into your computer, open the Phidget Control Panel. The Phidget Control Panel will list all connected Phidgets and associated objects, as well as the following information:

• Serial number: allows you to differentiate between similar Phidgets.
• Channel: allows you to differentiate between similar objects on a Phidget.
• Version number: corresponds to the firmware version your Phidget is running. If your Phidget is listed in red, your firmware is out of date. Update the firmware by double-clicking the entry.

The Phidget Control Panel can also be used to test your device. Double-clicking on an object will open an example.

### Voltage Ratio Input

Double-click on a Voltage Ratio Input object in order to run the example. General information about the selected object will be displayed at the top of the window. You can also experiment with the following functionality:

• Modify the change trigger and/or data interval value by dragging the sliders. For more information on these settings, see the data interval/change trigger page.
• Select the 1129 from the Sensor Type drop-down menu. The example will now convert the voltage into a 1 (touch detected) or 0 (touch not detected) automatically.

Converting the voltage to a 1 (touch detected) or 0 (touch not detected) is not specific to this example; it is handled by the Phidget libraries, with functions you have access to when you begin developing!

## Technical Details

### General

The 1129 is actually a capacitive change sensor. When the capacitance changes, the Sensor Value reports 1. If the Sensor Value remains at 1 for longer than 60 seconds, it will recalibrate back down to 0, regardless of whether the sensor is still being touched. This recalibration can also be done manually by unplugging the sensor and plugging it back into the HUB0000 (or a compatible product). This is a useful feature because it means that the sensor can be mounted on a flat surface, such as a piece of glass or plastic, and be reset so that it does not register the change in capacitance caused by the surface it is mounted on.

On the bottom side of the 1129 there is a small exposed metallic pad. A soldered connection can be made to the pad to increase the size and dimensions of the touchable area, such as attaching the sensor to a metallic object or some wire. Once the sensor is recalibrated, the Sensor Value will increase to 1 if the attached object is touched anywhere. Although there is an exposed metallic pad on the bottom of the board, the pad does not have to be touched directly to activate the sensor - touching anywhere on the board will activate the sensor.

The 1129 can work as a close proximity sensor, sensing objects at a distance of up to 1/2" from the board in all directions without direct contact. The 1129 will also work through a thickness of up to 1/2" of glass, plastic, or paper.
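As a sketch of the "functions you have access to when you begin developing", here is a minimal read of the sensor using the Phidget22 Python library (illustrative only; the hub port number and the 0.5 touch threshold are assumptions for this example, not documented constants):

from Phidget22.Devices.VoltageRatioInput import VoltageRatioInput

ch = VoltageRatioInput()
ch.setHubPort(0)                # the HUB0000 port the sensor cable is plugged into
ch.setIsHubPortDevice(True)     # treat the hub port itself as an analog (ratiometric) input
ch.openWaitForAttachment(5000)  # wait up to 5 s for the channel to attach

ratio = ch.getVoltageRatio()    # 0..1; the 1129 reads near 1 (5 V) when touched
print("Touch detected" if ratio > 0.5 else "No touch")

ch.close()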
### Customizing Sensitivity

The capacitor labelled C1 is the sensing capacitor. In other words, this is the capacitor that determines how sensitive the sensor is. By default it is a 10 nF capacitor; by changing this value we can adjust the sensitivity. Smaller capacitors will yield lower sensitivity, while higher values will yield higher sensitivity. The integrated circuit (IC) on the 1129 specifies that the acceptable range for C1 is 2 nF - 50 nF. In testing, 2 nF will not sense through any thickness of material, while 50 nF can sense through over 1" of material and even through double-paned windows with an air gap of up to 3/4". If the size of your capacitive touch surface is very large, you may have to increase C1 even more, up to 100 nF. (C1 is soldered onto the board; in order to replace it you will need a soldering iron.)

### Phidget Cable

The Phidget Cable is a 3-pin, 0.100 inch pitch locking connector. The connectors are commonly available - refer to the Analog Input Primer for manufacturer part numbers.

## What to do Next

• Programming Languages - Find your preferred programming language here and learn how to write your own code with Phidgets!
• Phidget Programming Basics - Once you have set up Phidgets to work with your programming environment, we recommend you read our Phidget Programming Basics page to learn the fundamentals of programming with Phidgets.

#### Product Specifications

Sensor Properties
Sensor Type | Touch (Capacitive)
Controlled By | VoltageRatio Input
Sensor Output Type | Ratiometric
Detecting Distance Max | 12.7 mm

Electrical Properties
Current Consumption Max | 500 μA
Output Impedance | 1 kΩ
Supply Voltage Min | 1.8 V DC
Supply Voltage Max | 5.5 V DC

Physical Properties
Operating Temperature Min | -40 °C
Operating Temperature Max | 85 °C

Customs Information
American HTS Import Code | 8473.30.51.00
Country of Origin | CN (China)

#### Product History

Date | Board Revision | Device Version | Packaging Revision | Comment
November 2009 | 0 | N/A | - | Product Release
September 2012 | 1 | N/A | - | Redesign due to component obsolescence
December 2017 | 1 | N/A | B | Removed Phidget cable from packaging

This device doesn't have an API of its own. It is controlled by opening a VoltageRatioInput channel on the Phidget that it's connected to. For a list of compatible Phidgets with VoltageRatio Inputs, see the Connection & Compatibility tab. You can find details for the VoltageRatioInput API on the API tab for the Phidget that this sensor connects to.

Part Number | Price | Controlled By | Detecting Distance Max
1016_0 | $25.00 | USB (Mini-USB) | 10 mm
1129_1B | $9.00 | VoltageRatio Input | 12.7 mm
HIN1000_0 | $12.00 | VINT | 5 mm
HIN1001_0 | $15.00 | VINT | 5 mm
• J-PLUS is an ongoing 12-band photometric optical survey, observing thousands of square degrees of the Northern hemisphere from the dedicated JAST/T80 telescope at the Observatorio Astrofísico de Javalambre. T80Cam is a 2 sq.deg field-of-view camera mounted on this 83cm-diameter telescope, and is equipped with a unique system of filters spanning the entire optical range. This filter system is a combination of broad, medium and narrow-band filters, optimally designed to extract the rest-frame spectral features (the 3700-4000 Å Balmer break region, H$\delta$, Ca H+K, the G-band, the Mgb and Ca triplets) that are key both to characterizing stellar types and to delivering a low-resolution photo-spectrum for each pixel of the sky observed. With a typical depth of AB $\sim 21.25$ mag per band, this filter set thus allows for an indiscriminate and accurate characterization of the stellar population in our Galaxy; it provides unprecedented 2D photo-spectral information for all resolved galaxies in the local universe, as well as accurate photo-z estimates ($\Delta\,z\sim 0.01-0.03$) for moderately bright (up to $r\sim 20$ mag) extragalactic sources. While some narrow band filters are designed for the study of particular emission features ([OII]/$\lambda$3727, H$\alpha$/$\lambda$6563) up to $z < 0.015$, they also provide well-defined windows for the analysis of other emission lines at higher redshifts. As a result, J-PLUS has the potential to contribute to a wide range of fields in Astrophysics, both in the nearby universe (Milky Way, 2D IFU-like studies, stellar populations of nearby and moderate redshift galaxies, clusters of galaxies) and at high redshifts (ELGs at $z\approx 0.77, 2.2$ and $4.4$, QSOs, etc.). With this paper, we release $\sim 36$ sq.deg of J-PLUS data, containing about $1.5\times 10^5$ stars and $10^5$ galaxies at $r<21$ mag. • The very-high-energy (VHE, $\gtrsim 100$ GeV) $\gamma$-ray MAGIC observations of the blazar S4 0954+65 were triggered by an exceptionally high flux state of emission in the optical. This blazar has a disputed redshift of z=0.368 or z$\geqslant$0.45 and an uncertain classification among blazar subclasses. The exceptional source state described here makes for an excellent opportunity to understand physical processes in the jet of S4 0954+65 and thus contribute to its classification. We investigate the multiwavelength (MWL) light curve and spectral energy distribution (SED) of the S4 0954+65 blazar during an enhanced state in February 2015 and put it in context with possible emission scenarios. We collect photometric data in the radio, optical, X-ray, and $\gamma$-ray bands. We study both the optical polarization and the inner parsec-scale jet behavior with 43 GHz data. Observations with the MAGIC telescopes led to the first detection of S4 0954+65 at VHE. Simultaneous data with Fermi-LAT at high-energy $\gamma$ rays (HE, 100 MeV < E < 100 GeV) also show a period of increased activity. Imaging at 43 GHz reveals the emergence of a new feature in the radio jet in coincidence with the VHE flare. Simultaneous monitoring of the optical polarization angle reveals a rotation of approximately 100$^\circ$. (...) The broadband spectrum can be modeled with an emission mechanism commonly invoked for flat spectrum radio quasars, i.e. inverse Compton scattering on an external soft photon field from the dust torus, also known as external Compton.
The light curve and SED phenomenology is consistent with an interpretation of a blob propagating through a helically structured magnetic field and eventually crossing a standing shock in the jet, a scenario typically applied to flat spectrum radio quasars (FSRQs) and low-frequency peaked BL Lac objects (LBL). • ### Stellar laboratories. IX. New Se V, Sr IV - VII, Te VI, and I VI oscillator strengths and the Se, Sr, Te, and I abundances in the hot white dwarfs G191-B2B and RE 0503-289(1706.09215) June 28, 2017 astro-ph.SR To analyze spectra of hot stars, advanced non-local thermodynamic equilibrium (NLTE) model-atmosphere techniques are mandatory. Reliable atomic data are mandatory for the calculation of such model atmospheres. We aim to calculate new Sr IV - VII oscillator strengths to identify for the first time Sr spectral lines in hot white dwarf (WD) stars and to determine the photospheric Sr abundances. To measure the abundances of Se, Te, and I in hot WDs, we aim to compute new Se V, Te VI, and I VI oscillator strengths. To consider radiative and collisional bound-bound transitions of Se V, Sr IV - VII, Te VI, and I VI in our NLTE atmosphere models, we calculated oscillator strengths for these ions. We newly identified four Se V, 23 Sr V, one Te VI, and three I VI lines in the ultraviolet (UV) spectrum of RE0503-289. We measured a photospheric Sr abundance of 6.5 +3.8/-2.4 x 10**-4 (mass fraction, 9500 - 23800 times solar). We determined the abundances of Se (1.6 +0.9/-0.6 x 10**-3, 8000 - 20000), Te (2.5 +1.5/-0.9 x 10**-4, 11000 - 28000), and I (1.4 +0.8/-0.5 x 10**-5, 2700 - 6700). No Se, Sr, Te, or I lines were found in the UV spectra of G191-B2B, and we could determine only upper abundance limits of approximately 100 times solar. All identified Se V, Sr V, Te VI, and I VI lines in the UV spectrum of RE0503-289 were simultaneously well reproduced with our newly calculated oscillator strengths. • ### Hydrogen Balmer Line Broadening in Solar and Stellar Flares(1702.03321) Feb. 10, 2017 astro-ph.SR The broadening of the hydrogen lines during flares is thought to result from increased charge (electron, proton) density in the flare chromosphere. However, disagreements between theory and modeling prescriptions have precluded an accurate diagnostic of the degree of ionization and compression resulting from flare heating in the chromosphere. To resolve this issue, we have incorporated the unified theory of electric pressure broadening of the hydrogen lines into the non-LTE radiative transfer code RH. This broadening prescription produces a much more realistic spectrum of the quiescent, A0 star Vega compared to the analytic approximations used as a damping parameter in the Voigt profiles. We test recent radiative-hydrodynamic (RHD) simulations of the atmospheric response to high nonthermal electron beam fluxes with the new broadening prescription and find that the Balmer lines are over-broadened at the densest times in the simulations. Adding many simultaneously heated and cooling model loops as a "multithread" model improves the agreement with the observations. We revisit the three-component phenomenological flare model of the YZ CMi Megaflare using recent and new RHD models.
The evolution of the broadening, line flux ratios, and continuum flux ratios are well-reproduced by a multithread model with high-flux nonthermal electron beam heating, an extended decay phase model, and a "hot spot" atmosphere heated by an ultrarelativistic electron beam with reasonable filling factors: 0.1%, 1%, and 0.1% of the visible stellar hemisphere, respectively. The new modeling motivates future work to understand the origin of the extended gradual phase emission. • ### Stellar laboratories. VIII. New Zr IV - VII, Xe IV - V, and Xe VII oscillator strengths and the Al, Zr, and Xe abundances in the hot white dwarfs G191-B2B and RE0503-289(1611.07364) Nov. 21, 2016 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. To search for Zr and Xe lines in the ultraviolet (UV) spectra of G191-B2B and RE0503-289, new Zr IV-VII, Xe IV-V, and Xe VIII oscillator strengths were calculated. This allows, for the first time, determination of the Zr abundance in white dwarf (WD) stars and improvement of the Xe abundance determinations. We calculated Zr IV-VII, Xe IV-V, and Xe VIII oscillator strengths to consider radiative and collisional bound-bound transitions of Zr and Xe in our NLTE stellar-atmosphere models for the analysis of their lines exhibited in UV observations of the hot WDs G191-B2B and RE0503-289. We identified one new Zr IV, 14 new Zr V, and ten new Zr VI lines in the spectrum of RE0503-289. Zr was detected for the first time in a WD. We measured a Zr abundance of -3.5 +/- 0.2 (logarithmic mass fraction, approx. 11 500 times solar). We identified five new Xe VI lines and determined a Xe abundance of -3.9 +/- 0.2 (approx. 7500 times solar). We determined a preliminary photospheric Al abundance of -4.3 +/- 0.2 (solar) in RE0503-289. In the spectra of G191-B2B, no Zr line was identified. The strongest Zr IV line (1598.948 A) in our model gave an upper limit of -5.6 +/- 0.3, which is about 100 times solar. No Xe line was identified in the UV spectrum of G191-B2B, and we confirmed the previously determined upper limit of -6.8 +/- 0.3 (ten times solar). Precise measurements and calculations of atomic data are a prerequisite for advanced NLTE stellar-atmosphere modeling. Observed Zr IV - VI and Xe VI - VII line profiles in the UV spectrum of RE0503-289 were simultaneously well reproduced. • ### Complete spectral energy distribution of the hot, helium-rich white dwarf RX J0503.9-2854(1610.09177) Oct. 28, 2016 astro-ph.SR In the line-of-sight toward the DO-type white dwarf RX J0503.9-2854, the density of the interstellar medium (ISM) is very low, and thus the contamination of the stellar spectrum is almost negligible. This allows us to identify many metal lines in a wide wavelength range from the extreme ultraviolet to the near infrared. In previous spectral analyses, many metal lines in the ultraviolet spectrum of RX J0503.9-2854 have been identified. A complete line list of observed and identified lines is presented here. We compared synthetic spectra, calculated from model atmospheres in non-local thermodynamic equilibrium, with observations. In total, we identified 1272 lines (279 of them newly assigned) in the wavelength range from the extreme ultraviolet to the near infrared; 287 lines remain unidentified.
A close inspection of the EUV shows that no good fit to the observed shape of the stellar continuum flux can yet be achieved, although He, C, N, O, Al, Si, P, S, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, Ga, Ge, As, Kr, Zr, Mo, Sn, Xe, and Ba are included in the stellar atmosphere models. There are two possible reasons for the deviation between observed and synthetic flux in the EUV. Opacities from hitherto unconsidered elements in the model-atmosphere calculation may be missing and/or the effective temperature is slightly lower than previously determined. • The MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes observed the BL Lac object H1722+119 (redshift unknown) for six consecutive nights between 2013 May 17 and 22, for a total of 12.5 h. The observations were triggered by high activity in the optical band measured by the KVA (Kungliga Vetenskapsakademien) telescope. The source was detected for the first time in the very high energy (VHE, $E > 100$ GeV) $\gamma$-ray band, with a statistical significance of 5.9 $\sigma$. The integral flux above 150 GeV is estimated to be $(2.0\pm 0.5)$ per cent of the Crab Nebula flux. We used contemporaneous high energy (HE, 100 MeV $< E < 100$ GeV) $\gamma$-ray observations from Fermi-LAT (Large Area Telescope) to estimate the redshift of the source. Within the framework of the current extragalactic background light models, we estimate the redshift to be $z = 0.34 \pm 0.15$. Additionally, we used contemporaneous X-ray to radio data collected by the instruments on board the Swift satellite, the KVA, and the OVRO (Owens Valley Radio Observatory) telescope to study the multifrequency characteristics of the source. We found no significant temporal variability of the flux in the HE and VHE bands. The flux in the optical and radio wavebands, on the other hand, did vary with different patterns. The spectral energy distribution (SED) of H1722+119 shows surprising behaviour in the $\sim 3\times10^{14} - 10^{18}$ Hz frequency range. It can be modelled using an inhomogeneous helical jet synchrotron self-Compton model. • ### Stellar laboratories. VII. New Kr IV - VII oscillator strengths and an improved spectral analysis of the hot, hydrogen-deficient DO-type white dwarf RE0503-289(1603.00701) March 2, 2016 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. New Kr IV - VII oscillator strengths for a large number of lines allow us to construct more detailed model atoms for our NLTE model-atmosphere calculations. This enables us to search for additional Kr lines in observed spectra and to improve Kr abundance determinations. We calculated Kr IV - VII oscillator strengths to consider radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Kr lines exhibited in high-resolution and high-S/N ultraviolet (UV) observations of the hot white dwarf RE 0503-289. We reanalyzed the effective temperature and surface gravity and determined Teff = 70 000 +/- 2000 K and log (g / cm/s**2) = 7.5 +/- 0.1. We newly identified ten Kr V lines and one Kr VI line in the spectrum of RE 0503-289. We measured a Kr abundance of -3.3 +/- 0.3 (logarithmic mass fraction).
We discovered that the interstellar absorption toward RE 0503-289 has a multi-velocity structure within a radial-velocity interval of -40 km/s < vrad < +18 km/s. Reliable measurements and calculations of atomic data are a prerequisite for state-of-the-art NLTE stellar-atmosphere modeling. Observed Kr V - VII line profiles in the UV spectrum of the white dwarf RE 0503-289 were simultaneously well reproduced with our newly calculated oscillator strengths. • ### Stellar laboratories. VI. New Mo IV - VII oscillator strengths and the molybdenum abundance in the hot white dwarfs G191-B2B and RE0503-289(1512.07525) Dec. 23, 2015 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation. To identify molybdenum lines in the ultraviolet (UV) spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE0503-289 and to determine their photospheric Mo abundances, newly calculated Mo IV - VII oscillator strengths are used. We identified twelve Mo V and nine Mo VI lines in the UV spectrum of RE0503-289 and measured a photospheric Mo abundance of 1.2 - 3.0 x 10**-4 (mass fraction, 22500 - 56400 times the solar abundance). In addition, from the As V and Sn IV resonance lines, we measured mass fractions of arsenic (0.5 - 1.3 x 10**-5, about 300 - 1200 times solar) and tin (1.3 - 3.2 x 10**-4, about 14300 - 35200 times solar). For G191-B2B, upper limits were determined for the abundances of Mo (5.3 x 10**-7, 100 times solar) and, in addition, for Kr (1.1 x 10**-6, 10 times solar) and Xe (1.7 x 10**-7, 10 times solar). The arsenic abundance was determined (2.3 - 5.9 x 10**-7, about 21 - 53 times solar). A new, registered German Astrophysical Virtual Observatory (GAVO) service, TOSS, has been constructed to provide weighted oscillator strengths and transition probabilities. Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Mo V - VI line profiles in the UV spectrum of the white dwarf RE0503-289 were well reproduced with our newly calculated oscillator strengths. For the first time, this allowed us to determine the photospheric Mo abundance in a white dwarf. • ### The XMM deep survey in the CDF-S. IX. An X-ray outflow in a luminous obscured quasar at z~1.6(1509.05413) Sept. 17, 2015 astro-ph.GA, astro-ph.HE In active galactic nuclei (AGN)-galaxy co-evolution models, AGN winds and outflows are often invoked to explain why super-massive black holes and galaxies stop growing efficiently at a certain phase of their lives. They are commonly referred to as the leading actors of feedback processes. Evidence of ultra-fast (v>0.05c) outflows in the innermost regions of AGN has been collected in the past decade by sensitive X-ray observations for sizable samples of AGN, mostly at low redshift. Here we present ultra-deep XMM-Newton and Chandra spectral data of an obscured (Nh~2x10^{23} cm^-2), intrinsically luminous (L2-10keV~4x10^{44} erg/s) quasar (named PID352) at z~1.6 (derived from the X-ray spectral analysis) in the Chandra Deep Field-South. The source is characterized by an iron emission and absorption line complex at observed energies of E~2-3 keV.
While the emission line is interpreted as being due to neutral iron (consistent with the presence of cold absorption), the absorption feature is due to highly ionized iron transitions (FeXXV, FeXXVI) with an outflowing velocity of 0.14^{+0.02}_{-0.06}c, as derived from photoionization models. The mass outflow rate - ~2 Msun/yr - is similar to the source accretion rate, and the derived mechanical energy rate is ~9.5x10^{44} erg/s, corresponding to 9% of the source bolometric luminosity. PID352 represents one of the few cases where indications of X-ray outflowing gas have been observed at high redshift thus far. This wind is powerful enough to provide feedback on the host galaxy. • ### NuSTAR discovery of an unusually steady long-term spin-up of the Be binary 2RXP J130159.6-635806(1507.04534) July 16, 2015 astro-ph.HE We present spectral and timing analysis of NuSTAR observations of the accreting X-ray pulsar 2RXP J130159.6-635806. The source was serendipitously observed during a campaign focused on the gamma-ray binary PSR B1259-63 and was later targeted for a dedicated observation. The spectrum has a typical shape for accreting X-ray pulsars, consisting of a simple power law with an exponential cutoff starting at ~7 keV with a folding energy of E_fold=~18 keV. There is also an indication of the presence of a 6.4 keV iron line in the spectrum at the ~3 sigma significance level. NuSTAR measurements of the pulsation period reveal that the pulsar has undergone a strong and steady spin-up for the last 20 years. The pulsed fraction is estimated to be ~80%, and is constant with energy up to 40 keV. The power density spectrum shows a break towards higher frequencies relative to the current spin period. This, together with steady persistent luminosity, points to a long-term mass accretion rate high enough to bring the pulsar out of spin equilibrium. • We performed a 4.5-month multi-instrument campaign (from radio to VHE gamma rays) on Mrk421 between January 2009 and June 2009, which included VLBA, F-GAMMA, GASP-WEBT, Swift, RXTE, Fermi-LAT, MAGIC, and Whipple, among other instruments and collaborations. Mrk421 was found in its typical (non-flaring) activity state, with a VHE flux of about half that of the Crab Nebula, yet the light curves show significant variability at all wavelengths, the highest variability being in the X-rays. We determined the power spectral densities (PSD) at most wavelengths and found that all PSDs can be described by power-laws without a break, and with indices consistent with pink/red-noise behavior. We observed a harder-when-brighter behavior in the X-ray spectra and measured a positive correlation between VHE and X-ray fluxes with zero time lag. Such characteristics have been reported many times during flaring activity, but here they are reported for the first time in the non-flaring state. We also observed an overall anti-correlation between optical/UV and X-rays extending over the duration of the campaign. The harder-when-brighter behavior in the X-ray spectra and the measured positive X-ray/VHE correlation during the 2009 multi-wavelength campaign suggests that the physical processes dominating the emission during non-flaring states have similarities with those occurring during flaring activity. In particular, this observation supports leptonic scenarios as being responsible for the emission of Mrk421 during non-flaring activity. 
Such a temporally extended X-ray/VHE correlation is not driven by any single flaring event, and hence is difficult to explain within the standard hadronic scenarios. The highest variability is observed in the X-ray band, which, within the one-zone synchrotron self-Compton scenario, indicates that the electron energy distribution is most variable at the highest energies. • ### Stellar laboratories IV. New Ga IV, Ga V, and Ga VI oscillator strengths and the gallium abundance in the hot white dwarfs G191-B2B and RE0503-289(1501.07751) Jan. 30, 2015 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, advanced non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These atmospheres are strongly dependent on the reliability of the atomic data that are used to calculate them. Reliable Ga IV - VI oscillator strengths are used to identify Ga lines in the spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE0503-289 and to determine their photospheric Ga abundances. We newly calculated Ga IV - VI oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Ga lines exhibited in high-resolution and high-S/N UV observations of G191-B2B and RE0503-289. We unambiguously detected 20 isolated and 6 blended (with lines of other species) Ga V lines in the Far Ultraviolet Spectroscopic Explorer (FUSE) spectrum of RE0503-289. The identification of Ga IV and Ga VI lines is uncertain because they are weak and partly blended by other lines. The determined Ga abundance is 3.5 +/- 0.5 x 10**-5 (mass fraction, about 625 times solar). The Ga IV / Ga V ionization equilibrium, which is a very sensitive indicator for the effective temperature, is well reproduced in RE0503-289. We identified the strongest Ga IV lines (1258.801, 1338.129 A) in the HST/STIS (Hubble Space Telescope / Space Telescope Imaging Spectrograph) spectrum of G191-B2B and measured a Ga abundance of 2.0 +/- 0.5 x 10**-6 (about 22 times solar). Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Ga IV - V line profiles in two white dwarf (G191-B2B and RE0503-289) ultraviolet spectra were well reproduced with our newly calculated oscillator strengths. For the first time, this allowed us to determine the photospheric Ga abundance in white dwarfs. • ### Modelling the $\gamma$-ray and radio light curves of the double pulsar system(1411.0881) Nov. 4, 2014 astro-ph.HE Guillemot et al. recently reported the discovery of $\gamma$-ray pulsations from the 22.7ms pulsar (pulsar A) in the famous double pulsar system J0737-3039A/B. The $\gamma$-ray light curve (LC) of pulsar A has two peaks separated by approximately half a rotation, and these are non-coincident with the observed radio and X-ray peaks. This suggests that the $\gamma$-ray emission originates in a part of the magnetosphere distinct from where the radio and X-ray radiation is generated. Thus far, three different methods have been applied to constrain the viewing geometry of pulsar A (its inclination and observer angles $\alpha$ and $\zeta$): geometric modelling of the radio and $\gamma$-ray light curves, modelling of the position angle sweep in phase seen in the radio polarisation data, and independent studies of the time evolution of the radio pulse profile of pulsar A.
These three independent, complementary methods have yielded consistent results: pulsar A's rotation axis is likely perpendicular to the orbital plane of the binary system, and its magnetic axis close to lying in the orbital plane (making this pulsar an orthogonal rotator). The observer is furthermore observing emission close to the magnetic axis. Thus far, however, current models could not reproduce all the characteristics of the radio and $\gamma$-ray light curves, specifically the large radio-to-$\gamma$ phase lag. In this paper we discuss some preliminary modelling attempts to address this problem, and offer ideas on how the LC fits may be improved by adapting the standard geometric models in order to reproduce the profile positions more accurately. • The discovery of rapidly variable Very High Energy (VHE; E > 100 GeV) gamma-ray emission from 4C +21.35 (PKS 1222+216) by MAGIC on 2010 June 17, triggered by the high activity detected by the Fermi Large Area Telescope (LAT) in high energy (HE; E > 100 MeV) gamma-rays, poses intriguing questions on the location of the gamma-ray emitting region in this flat spectrum radio quasar. We present multifrequency data of 4C +21.35 collected from centimeter to VHE during 2010 to investigate the properties of this source and discuss a possible emission model. The first hint of detection at VHE was observed by MAGIC on 2010 May 3, soon after a gamma-ray flare detected by Fermi-LAT that peaked on April 29. The same emission mechanism may therefore be responsible for both the HE and VHE emission during the 2010 flaring episodes. Two optical peaks were detected on 2010 April 20 and June 30, close in time but not simultaneous with the two gamma-ray peaks, while no clear connection was observed between the X-ray and gamma-ray emission. An increasing flux density was observed in radio and mm bands from the beginning of 2009, in accordance with the increasing gamma-ray activity observed by Fermi-LAT, and peaking on 2011 January 27 in the mm regime (230 GHz). We model the spectral energy distributions (SEDs) of 4C +21.35 for the two periods of the VHE detection and a quiescent state, using a one-zone model with the emission coming from a very compact region outside the broad line region. The three SEDs can be fit with a combination of synchrotron self-Compton and external Compton emission of seed photons from a dust torus, changing only the electron distribution parameters between the epochs. The fit of the optical/UV part of the spectrum for 2010 April 29 seems to favor an inner disk radius of <6 gravitational radii, as one would expect from a prograde-rotating Kerr black hole.
• ### On helium-dominated stellar evolution: the mysterious role of the O(He)-type stars(1405.1589) May 7, 2014 astro-ph.SR About a quarter of all post-asymptotic giant branch (AGB) stars are hydrogen-deficient. Stellar evolutionary models explain the carbon-dominated H-deficient stars by a (very) late thermal pulse scenario where the hydrogen-rich envelope is mixed with the helium-rich intershell layer. Depending on the particular time at which the final flash occurs, the entire hydrogen envelope may be burned. In contrast, helium-dominated post-AGB stars and their evolution are not yet understood. A small group of very hot, helium-dominated stars is formed by O(He)-type stars. We performed a detailed spectral analysis of ultraviolet and optical spectra of four O(He) stars by means of state-of-the-art non-LTE model-atmosphere techniques. We determined effective temperatures, surface gravities, and the abundances of H, He, C, N, O, F, Ne, Si, P, S, Ar, and Fe. By deriving upper limits for the mass-loss rates of the O(He) stars, we found that they do not exhibit enhanced mass-loss. The comparison with evolutionary models shows that the status of the O(He) stars remains uncertain. Their abundances match predictions of a double helium white dwarf merger scenario, suggesting that they might be the progeny of the compact and of the luminous helium-rich sdO-type stars. The existence of planetary nebulae that do not show helium enrichment around every other O(He) star precludes a merger origin for these stars. These stars must have formed in a different way, for instance via enhanced mass-loss during their post-AGB evolution or a merger within a common-envelope (CE) of a CO-WD and a red giant or AGB star. A helium-dominated stellar evolutionary sequence exists that may be fed by different types of mergers or CE scenarios. It appears likely that all of these pass through the O(He) phase just before they become white dwarfs. • ### Stellar laboratories III. New Ba V, Ba VI, and Ba VII oscillator strengths and the barium abundance in the hot white dwarfs G191-B2B and RE0503-289(1404.6094) April 24, 2014 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise (S/N) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that is used for their calculation.
Reliable Ba V - VII oscillator strengths are used to identify Ba lines in the spectra of the DA-type white dwarf G191-B2B and the DO-type white dwarf RE0503-289 and to determine their photospheric Ba abundances. We newly calculated Ba V - VII oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Ba lines exhibited in high-resolution and high-S/N UV observations of G191-B2B and RE0503-289. For the first time, we identified highly ionized Ba in the spectra of hot white dwarfs. We detected Ba VI and Ba VII lines in the Far Ultraviolet Spectroscopic Explorer (FUSE) spectrum of RE0503-289. The Ba VI / Ba VII ionization equilibrium is well reproduced with the previously determined effective temperature of 70000 K and surface gravity of $\log g = 7.5$. The Ba abundance is $3.5 \pm 0.5 \times 10^{-4}$ (mass fraction, about 23000 times the solar value). In the FUSE spectrum of G191-B2B, we identified the strongest Ba VII line (at 993.41 \AA) only, and determined a Ba abundance of $4.0 \pm 0.5 \times 10^{-6}$ (about 265 times solar). Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. Observed Ba VI - VII line profiles in two white dwarfs' (G191-B2B and RE0503-289) far-ultraviolet spectra were well reproduced with our newly calculated oscillator strengths. This allowed us to determine the photospheric Ba abundance of these two stars precisely. • ### The virtual observatory service TheoSSA: Establishing a database of synthetic stellar flux standards. II. NLTE spectral analysis of the OB-type subdwarf Feige 110(1404.2446) April 9, 2014 astro-ph.SR In the framework of the Virtual Observatory (VO), the German Astrophysical Virtual Observatory (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium (NLTE) stellar-atmosphere models that consider opacities of species up to trans-iron elements will be used to provide a reliable synthetic spectrum to compare with observations. In case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. We present a state-of-the-art spectral analysis of Feige 110. We determined $T_\mathrm{eff} = 47\,250 \pm 2000\,\mathrm{K}$, $\log g = 6.00 \pm 0.20$ and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes. • ### Stellar laboratories II.
New Zn IV and Zn V oscillator strengths and their validation in the hot white dwarfs G191-B2B and RE0503-289(1403.2183) March 10, 2014 physics.atom-ph, astro-ph.SR For the spectral analysis of high-resolution and high-signal-to-noise spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data that are used for their calculation. In a recent analysis of the ultraviolet (UV) spectrum of the DA-type white dwarf G191-B2B, 21 Zn IV lines were newly identified. Because of the lack of Zn IV data, transition probabilities of the isoelectronic Ge VI were adapted for a first, coarse determination of the photospheric Zn abundance. We performed new calculations of Zn IV and Zn V oscillator strengths to consider their radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of the Zn IV - V spectrum exhibited in high-resolution and high-S/N UV observations of G191-B2B and RE0503-289. In the UV spectrum of G191-B2B, we identify 31 Zn IV and 16 Zn V lines. Most of these are identified for the first time in any star. We can reproduce almost all of them well at log Zn = -5.52 +/- 0.2 (mass fraction, about 1.7 times solar). In particular, the Zn IV / Zn V ionization equilibrium, which is a very sensitive indicator of the effective temperature, is well reproduced with the previously determined Teff = 60000 +/- 2000 K and log g = 7.60 +/- 0.05. In the spectrum of RE0503-289, we identified 128 Zn V lines for the first time and determined log Zn = -3.57 +/- 0.2 (155 times solar). Reliable measurements and calculations of atomic data are a prerequisite for stellar-atmosphere modeling. The observed Zn IV and Zn V line profiles in the ultraviolet spectra of the two white dwarfs (G191-B2B and RE0503-289) were well reproduced with our newly calculated oscillator strengths. This allowed us to determine the photospheric Zn abundance of these two stars precisely. • [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary aCentauri have higher than solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the aCen system. Having already detected the temperature minimum, Tmin, of aCenA, we here attempt to do so also for the companion aCenB. Using the aCen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around aCen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity fd ~ 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses at the 2.5 sigma level are observed at 24 mu for both stars, which, if interpreted to be due to dust, would correspond to fd ~ (1-3)e-5. Dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the aCen stars, viz. <~4e-6 MMoon of 4 to 1000 mu size grains, distributed according to n(a) ~ a^-3.5. Similarly, for filled-in Tmin emission, the corresponding EKBs could account for ~1e-3 MMoon of dust.
Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The F-IR SED of aCenB is marginally consistent with the presence of a minimum temperature region in the upper atmosphere. We also show that an aCenA-like temperature minimum may result in an erroneous apprehension about the presence of dust around other stars. • ### The virtual observatory service TheoSSA: Establishing a database of synthetic stellar flux standards. I. NLTE spectral analysis of the DA-type white dwarf G 191-B2B(1308.6450) Aug. 29, 2013 astro-ph.SR H-rich, DA-type white dwarfs are particularly suited as primary standard stars for flux calibration. State-of-the-art NLTE models consider opacities of species up to trans-iron elements and provide reliable synthetic stellar-atmosphere spectra to compare with observation. We establish a database of theoretical spectra of stellar flux standards that are easily accessible via a web interface. In the framework of the Virtual Observatory, the German Astrophysical Virtual Observatory developed the registered service TheoSSA. It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code. In the case of the DA white dwarf G 191-B2B, we demonstrate that the model reproduces not only its overall continuum shape but also the numerous metal lines exhibited in its ultraviolet spectrum. TheoSSA is in operation and presently contains a variety of SEDs for DA white dwarfs.
It will be extended in the near future and can host SEDs of all primary and secondary flux standards. The spectral analysis of G 191-B2B has shown that our hydrostatic models reproduce the observations best at an effective temperature of 60000 +/- 2000 K and a surface gravity of log g = 7.60 +/- 0.05. We newly identified Fe VI, Ni VI, and Zn IV lines. For the first time, we determined the photospheric zinc abundance with a logarithmic mass fraction of -4.89 (7.5 times solar). The abundances of He (upper limit), C, N, O, Al, Si, P, S, Fe, Ni, Ge, and Sn were precisely determined. Upper abundance limits of 10% solar were derived for Ti, Cr, Mn, and Co. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of all astronomical data and the cross-calibration between different instruments can be based on the same models, with SEDs calculated by different model-atmosphere codes that are easy to compare. • ### High-Velocity Line Forming Regions in the Type Ia Supernova 2009ig(1302.3537) Aug. 29, 2013 astro-ph.CO, astro-ph.SR We report measurements and analysis of high-velocity (> 20,000 km/s) and photospheric absorption features in a series of spectra of the Type Ia supernova (SN) 2009ig obtained between -14d and +13d with respect to the time of maximum B-band luminosity. We identify lines of Si II, Si III, S II, Ca II and Fe II that produce both high-velocity (HVF) and photospheric-velocity (PVF) absorption features. SN 2009ig is unusual for the large number of lines with detectable HVF in the spectra, but the light-curve parameters correspond to a slightly overluminous but unexceptional SN Ia (M_B = -19.46 mag and Delta_m15 (B) = 0.90 mag). Similarly, the Si II lambda_6355 velocity at the time of B-max is greater than "normal" for a SN Ia, but it is not extreme (v_Si = 13,400 km/s). The -14d and -13d spectra clearly resolve HVF from Si II lambda_6355 as separate absorptions from a detached line forming region. At these very early phases, detached HVF are prevalent in all lines. From -12d to -6d, HVF and PVF are detected simultaneously, and the two line forming regions maintain a constant separation of about 8,000 km/s. After -6d all absorption features are PVF. The observations of SN 2009ig provide a complete picture of the transition from HVF to PVF. Most SN Ia show evidence for HVF from multiple lines in spectra obtained before -10d, and we compare the spectra of SN 2009ig to observations of other SN. We show that each of the unusual line profiles for Si II lambda_6355 found in early-time spectra of SN Ia correlates to a specific phase in a common development sequence from HVF to PVF. • ### Diffuse interstellar bands in M33(1307.4112) July 15, 2013 astro-ph.GA, astro-ph.SR We present the first sample of diffuse interstellar bands (DIBs) in the nearby galaxy M33. Studying DIBs in other galaxies allows the behaviour of the carriers to be examined under interstellar conditions which can be quite different from those of the Milky Way, and to determine which DIB properties can be used as reliable probes of extragalactic interstellar media. Multi-object spectroscopy of 43 stars in M33 has been performed using Keck/DEIMOS. The stellar spectral types were determined and combined with literature photometry to determine the M33 reddenings E(B-V)_M33. Equivalent widths or upper limits have been measured for the {\lambda}5780 DIB towards each star. DIBs were detected towards 20 stars, demonstrating that their carriers are abundant in M33.
The relationship between DIB equivalent width and reddening is found to be at the upper end of the range observed in the Milky Way. The line of sight towards one star has an unusually strong ratio of DIB equivalent width to E(B-V)_M33, and a total of seven DIBs were detected towards this star.
# Is there a relationship between vector spaces and fields/rings/groups? I understand from a comment under Vector Spaces and Groups that every vector space is a group, but not every group is a vector space. Specifically, I would like to know, can I make a statement like: "All fields are rings, and all rings are groups"? At this point in my studies, I see various lists of axioms, and I'm trying to see the relationship between them all. This is all because I have a headache, so I went to lie down with a linear algebra book. It's not helping. Specifically, I would like to know, can I make a statement like: "All fields are rings, and all rings are groups"? That is all correct. A field satisfies all ring axioms plus some extra axioms, so a field is a ring. A ring is an Abelian group plus some more axioms, so each ring is a group. A vector space is also an Abelian group with some extra axioms relating it to a field. The field is an indispensable part of the definition of the vector space. If you define a vector space to be an Abelian group V which has a multiplication $V\times F\to V$ defined with a field (or division ring) F, satisfying some axioms, then you can replace F with another ring and do something similar, except that V is called a module over the ring rather than a vector space. In other words, rings and modules are a generalization of fields and vector spaces. • Thanks! Exactly what I was looking for. :D – MathAdam Sep 7 '15 at 18:02 • It strikes me as incorrect to say "A ring is an Abelian group plus some more axioms" since the signatures are incompatible; all rings are Abelian groups equipped with a multiplication operation (plus some more axioms) as well as they are monoids equipped with addition (plus some more axioms), but omitting this additional information seems problematic. – Milo Brandt Sep 8 '15 at 0:58 • Dear @MiloBrandt : Certainly "more axioms" can conceal anything, even things that change the signature. And certainly one can get at the same objects from multiple paths, but this one is most relevant to the user. I don't believe you are really seeing anything incorrect, but I wouldn't rule out a good case of "I wouldn't say it like that," and that's fine with me. Regards – rschwieb Sep 8 '15 at 3:22 Every vector space is over a field. And every field is a vector space over itself. So for instance $\Bbb R$ is a field. But we can also say that $\Bbb R$ is a vector space over the field $\Bbb R$. Likewise $\Bbb C$ can be considered a vector space over $\Bbb C$ (making it a one-dimensional vector space). Also $\Bbb C$ can be considered a vector space over $\Bbb R$ (making it a two-dimensional vector space). If we use a ring instead of a field as our scalars, then we get a module rather than a vector space. Then every module is over a ring. And every ring is a module over itself. • Also, every abelian group can be considered a module over $\mathbb Z$. – Paŭlo Ebermann Sep 7 '15 at 20:57 Yes, all fields are rings, and all rings are groups. That said, it is perhaps worthwhile to add a few words of clarification. Among these three - fields, rings and groups - groups have the simplest structure. Groups require only one operation among their members, and it is this operation that needs to satisfy the group axioms. Rings, however, require two operations, sometimes called "addition" and "multiplication", but this is only a convention. These two operations have to cooperate with each other -- this is actually one of the ring axioms.
You want to have something like the distributive law that connects addition and multiplication of numbers. This is possibly the reason why the two operations of a ring are often called "addition" and "multiplication". What might be confusing for beginners is that the ring structure is required to be a group with respect to one of these operations - traditionally the "addition" - while the second operation - the "multiplication" - is not required to induce a group. Indeed, one of the most important rings -- the ring of real-valued $n\times n$ matrices -- has two operations, addition and multiplication: with respect to addition the matrices form a group, but with respect to multiplication they do not (even after we've discarded the zero element), because some matrices do not have a multiplicative inverse. The ring of matrices has a multiplicative identity element, but this is not required by the definition. There are rings without a multiplicative identity. So a ring is a group with an additional structure - that obtained from another operation. A field is a very special kind of ring. Not only do we require two groups with respect to the two operations, we also require them to be commutative. This is quite a restriction compared to rings. More precisely, a field is a triple $\{F,+,\cdot\}$ such that $\{F,+\}$ is a commutative group and $\{F^*,\cdot\}$ is also a commutative group, where $F^*$ is obtained from $F$ by discarding the additive identity (normally denoted by $0$), and such that the operations $+$ and $\cdot$ satisfy the distributive law. I would recommend learning these terms hierarchically: start first with magmas, then magmas with special properties (semigroups), then monoids, then monoids with special properties (groups), then semirings, then rings, then rings with special properties (such as integral domains, factorial rings, fields, etc.). Consider at least one example of every type. Once you know about this, start with modules (over a ring), which include the special cases of vector spaces (which are by definition modules over a field) and of abelian groups (modules over the integers). You might want to understand the difference between a structure and a property.
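To make the matrix example above concrete, here is a small sketch (my own illustration, not part of the original thread; it assumes NumPy is available):

import numpy as np

# (2x2 real matrices, +) is an Abelian group: -A is always an additive inverse.
A = np.array([[1.0, 0.0], [0.0, 0.0]])          # a nonzero but singular matrix
print(np.allclose(A + (-A), np.zeros((2, 2))))  # True

# Under multiplication the nonzero matrices are not a group:
# det(A) = 0, so no matrix B satisfies A @ B = I.
print(np.linalg.det(A))                          # 0.0 -> no multiplicative inverse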
# 50. Consider a random process $X(t)=\sqrt{2}\sin(2\pi t+\phi)$, where the random phase $\phi$ is uniformly distributed in the interval $[0,2\pi]$. The auto-correlation $E[X(t_1)X(t_2)]$ is
(A) $\cos(2\pi(t_1+t_2))$
(B) $\sin(2\pi(t_1-t_2))$
(C) $\sin(2\pi(t_1+t_2))$
(D) $\cos(2\pi(t_1-t_2))$
Discuss the solution.
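A worked derivation (a standard computation added here for completeness; it was not part of the original page). By the product-to-sum identity $2\sin a\sin b = \cos(a-b) - \cos(a+b)$,

$E[X(t_1)X(t_2)] = E\left[2\sin(2\pi t_1+\phi)\sin(2\pi t_2+\phi)\right] = \cos\left(2\pi(t_1-t_2)\right) - E\left[\cos\left(2\pi(t_1+t_2)+2\phi\right)\right]$

Since $\phi$ is uniform on $[0,2\pi]$, the remaining expectation vanishes: $\dfrac{1}{2\pi}\displaystyle\int_0^{2\pi}\cos\left(2\pi(t_1+t_2)+2\phi\right)\,d\phi = 0$. Hence $E[X(t_1)X(t_2)] = \cos(2\pi(t_1-t_2))$, which is option (D).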
# Typesetting the square of a mathematical operator Assume I want to typeset the square of some mathematical operator A. Using \operatorname (amsmath package), there are basically two ways to do that: 1. $$\operatorname{A^{2}}$$ (i.e., the exponent is considered part of the operator name) 2. $$\operatorname{A}^{2}$$ (i.e., the exponent is not considered part of the operator name) According to my tests, the above two formulae are not equivalent, though. In fact, the first formula has a smaller height than the second one. Minimal example: \documentclass{article} \usepackage{amsmath} \newlength{\len} \begin{document} \begin{enumerate} \item \settoheight{\len}{$$\operatorname{A^{2}}$$} $$\operatorname{A^{2}}$$: height = \the\len \item \settoheight{\len}{$$\operatorname{A}^{2}$$} $$\operatorname{A}^{2}$$: height = \the\len \end{enumerate} \end{document} Can anybody explain what lies behind my observation? - This issue is due to how TeX handles operators. An operator is boxed, so superscript placement is different than for normal characters. –  Philippe Goutet Aug 20 '12 at 20:06 The reason for the difference is that TeX typesets superscripts differently depending on whether the superscript follows a character or a box, as described in rule 18a of Appendix G of The TeXbook. Since the macro \operatorname boxes its contents (because it calls \mathop, which does), \operatorname{A}^2 and \operatorname{A^2} differ (the first superscript concerns a box, whereas the second only the preceding A). You can easily see that an \operatorname and an \hbox behave similarly: \documentclass{article} \usepackage{graphicx} \usepackage{amsmath} \usepackage{xcolor} \begin{document} \begin{tabular}{ccc} \scalebox{5}{$\operatorname{A}^2$} & \scalebox{5}{$\hbox{A}^2$} & \scalebox{5}{$\operatorname{A^2}$} \\ \verb"$\operatorname{A}^2$" & \verb"$\hbox{A}^2$" & \verb"$\operatorname{A^2}$" \\ \end{tabular} \raisebox{1.22cm}[0pt]{\color{red}\rule{\textwidth}{0.4pt}} \end{document} Here are the technical details of the actual computations made by TeX in the present case: \documentclass[a4paper]{article} \usepackage{graphicx} \usepackage{xcolor} \usepackage{geometry} \begin{document} \setbox0=\hbox{$a$}% to initialize the maths fonts \begingroup \newdimen\h \newdimen\q \newdimen\boxedu \newdimen\unboxedu \newdimen\sigmafourteen \newdimen\sigmafive \q=\the\fontdimen18\scriptfont2 \sigmafourteen=\the\fontdimen14\textfont2 \sigmafive=\the\fontdimen5\textfont2 \noindent List of relevant font parameters and their values: \begin{quote} \begin{tabular}{lll} \texttt{x\_height} & $\sigma_5$ & \the\sigmafive \\ \texttt{sup2} & $\sigma_{14}$ & \the\sigmafourteen \\ \texttt{sup\_drop} & $q$ (it's $\sigma_{18}$ of superscript font) & \the\q \\ \end{tabular} \end{quote} Comparison of the amount the superscript is shifted up for a boxed and unboxed $A$: \begin{quote} \setbox0=\hbox{$A$} \h=\the\ht0 \def\maxof#1#2{% \ifdim#1>#2% #1% \else #2% \fi} \begin{tabular}{lll} & \tabularheading Boxed $A$ & \tabularheading Unboxed $A$ \\ \tabularheading height $h$ & \the\h & \the\h \\ \tabularheading base superscript shift $u_0$ & $h-q = \mathrm{\the\dimexpr\h-\q\relax}$ & 0pt \\ \tabularheading real shift $u = \max(u_0,\sigma_{14},\frac{1}{4}\sigma_5)$ & \boxedu=\dimexpr\h-\q\relax \boxedu=\maxof{\boxedu}{\sigmafourteen}% \global\boxedu=\maxof{\boxedu}{.25\sigmafive}% \the\boxedu & \unboxedu=0pt \unboxedu=\maxof{\unboxedu}{\sigmafourteen}% \global\unboxedu=\maxof{\unboxedu}{.25\sigmafive}% \the\unboxedu \end{tabular} \end{quote} Comparison of the calculations
with the real typesetting: \begin{quote} \begin{tabular}{cc} \scalebox{5}{$\hbox{$A$}^2$\hbox{$A$\raise\boxedu\hbox{$\scriptstyle2$}}} & \scalebox{5}{$A^2$\hbox{$A$\raise\unboxedu\hbox{$\scriptstyle2$}}} \\ \tabularheading boxed $A$ & \tabularheading unboxed $A$ \\ \end{tabular} \raisebox{1.35cm}[0pt]{\color{blue}\rule{9.5cm}{0.4pt}} \end{quote} \endgroup \end{document} - I've done some further tests. \hbox{<base>}^{<exponent>} doesn't behave the same as \mathop{<base>}^{<exponent>}. However, \operatorname internally uses something like \mathop{\kern0pt<base>}^{<exponent>}, and, in fact, this matches the behaviour of \hbox{<base>}^{<exponent>}. –  mhp Aug 21 '12 at 9:53 @mhp: do you have an example of \hbox{<base>}^{<exponent>} being different than \mathop{<base>}^{<exponent>}? Because you must be careful of what \mathop does that \hbox doesn't (it shifts single characters to the math axis) and of what \hbox does and \mathop doesn't (put the type in roman). This should normally account for the differences you saw. –  Philippe Goutet Aug 21 '12 at 10:06 Try \settoheight{\len}{$$\hbox{A}^{2}$$} vs. \settoheight{\len}{$$\mathrm{\mathop{A}^{2}}$$}. Moreover, you'll see the same effect if you replace \mathop with \mathord. –  mhp Aug 21 '12 at 10:46 @mhp: nothing unexpected happening here. Measuring is not enough, you should see what formulas look like. The 2 is low in \mathrm{\mathop{A}^{2}} because the A is in fact below the baseline as you see if you type \mathrm{A^2\mathop{A}^{2}}. As for \mathord giving the same result as \mathop height-wise, that's normal as {A}, \mathord{A} and A are all the same thing and, in \mathrm{\mathop{A}^{2}}, the 2 is as low as it can be just as in A^2. The \kern0pt in \operatorname is here for only one reason: avoiding that \operatorname{A} go below the baseline. –  Philippe Goutet Aug 21 '12 at 11:26 OK, so \mathop always boxes its content whereas \mathord boxes its content if there is more than one character, right? –  mhp Aug 21 '12 at 11:52 Here's a fairly detailed explanation of what goes on in the execution of an \operatorname instruction. Note that this explanation is simplified to the case of the use of this command without the * ("star") qualifier. (See amsopn.sty for the full details.) The \operatorname instruction (without the "star" qualifier) is set up as \DeclareRobustCommand{\operatorname}{{\qopname\newmcodes@ o}} where \qopname, in turn, is defined as \DeclareRobustCommand{\qopname}[3]{% \mathop{#1\kern\z@\operator@font#3}% \csname n#2limits@\endcsname}, \operator@font is given by \def\operator@font{\mathgroup\symoperators}, and \newmcodes@ is given -- inside a TeX group for which " has catcode 12 -- by \gdef\newmcodes@{\mathcode`\'39\mathcode`\*42\mathcode`\."613A% \ifnum\mathcode`\-=45 \else \mathchardef\std@minus\mathcode`\-\relax \fi \mathcode`\-45\mathcode`\/47\mathcode`\:"603A\relax} (Basically, the \newmcodes@ command modifies the meanings of the characters ' * . - / and : from their "regular" math-mode settings.) Finally, the command \z@ is equivalent to 0pt (zero length). Hence, executing the command \operatorname{xyz} is equivalent to executing {\qopname\newmcodes@ o xyz} which boils down to executing, after (i) recognizing that none of the special characters affected by the \newmcodes@ command are involved in the current example, (ii) resolving the construct in the \csname ...
\endcsname complex to \nolimits, and (iii) noting that \nolimits has no effect if we don't specify limits: {\mathop{\kern0pt \operator@font xyz}} Therefore, $\operatorname{A}^2$ resolves to ${\mathop{\kern0pt \operator@font A}}^2$ whereas $\operatorname{A^2}$ resolves to ${\mathop{\kern0pt \operator@font A^2}}$ If the "squaring instruction" is inside the \mathop instruction, it appears that the height of the letter(s) that precede the superscript-2 does not affect the vertical positioning of the 2. E.g., check out the positions of the 2 glyph in $\mathop{\kern0pt \operator@font ln^2}$ $\mathop{\kern0pt \operator@font sin^2}$ $\mathop{\kern0pt \operator@font cos^2}$ They're all the same. Conversely, if the "squaring instruction" is not inside the \mathop instruction, what comes into play is the height of the entire box that contains the "name" part of the \operatorname instruction; if the "name" part contains letters with ascenders, the box's height increases, and this will affect the positioning of the superscript-2. E.g., for $\ln^2$, $\det^2$, and $\cos^2$, the superscript is at different heights because of the differences in the heights of the boxes that contain ln, det, and cos, respectively. - @mhp: I've revised my answer thoroughly to track in all gory detail what exactly goes on in the execution of the \operatorname command. –  Mico Aug 20 '12 at 21:16 The placing of superscripts is not affected by the presence or absence of ascenders when applied to characters, the vertical position of the superscript x^2 and X^2 will be the same. On the other hand, when applied to boxes the height of the box is taken into account, so {xx}^2 and {XX}^2 are different. –  Khaled Hosny Aug 21 '12 at 4:55 @mhp: not always: {A}^2 gives the same result as A^2. But as soon as there is something else than one character inside the braces, it becomes boxed content, so the behavior is perfectly normal as all boxed content behaves the same. –  Philippe Goutet Aug 21 '12 at 9:59 @KhaledHosny -- thanks for providing this clarification. I'll update my answer to incorporate the importance of thinking in terms of boxes. –  Mico Aug 21 '12 at 11:16 @Mico: If I understand you right the last two paragraphs of your answer are contradictory to the answer of Philippe Goutet. –  mhp Aug 21 '12 at 12:05 You got some great answers explaining the TeXnicalities (and thus answering your question). I'd like to point out that you should never use $$\operatorname{A^{2}}$$, and that you probably just want $$A^2$$: If you have some mathematical operator, then you can use the variable A to denote that operator. In that case you should just use A^2. Only for special (non-variable) operators one should use \operatorname, e.g. \operatorname{E} for the expected value. (In this example it happens that \operatorname{E}^{2} doesn't really make sense, but you'd always put the square outside the \operatorname.) - Of course, you're right. But, as a matter of fact, \operatorname is not only used for real operators, but also for functions such as sin, log etc. –  mhp Aug 22 '12 at 14:15 By the way, I think you mean ‘you should never use $$\operatorname{A^{2}}$$’, right? –  mhp Aug 22 '12 at 14:19 @mhp: Oh my god, I wrote it the wrong way - thanks for pointing that out. Corrected! –  Hendrik Vogt Aug 22 '12 at 15:45 @mhp: Of course, \operatorname is used for special (non-variable) functions, too. I thought I didn't need to mention that - an operator is just a function, after all. –  Hendrik Vogt Aug 22 '12 at 15:47
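To see the character-versus-box behaviour from these answers in isolation, here is a minimal test file (my own sketch, not from the thread):

\documentclass{article}
\begin{document}
% A single character (braced or not) keeps the superscript at the standard
% height; a multi-character group or an \hbox is boxed, so the superscript
% is raised with the box height (Appendix G, rules 18a and 18c).
$x^2 \quad {x}^2 \quad {xx}^2 \quad {XX}^2 \quad \hbox{X}^2$
\end{document}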
# FFT for expanded form of equation multiplication I know how to use the FFT for multiplying two polynomials in $O(n \log n)$ time, but is there a way to use the FFT to compute the expanded form of the product before simplifying? For example, if you are multiplying $A(x) = 1 + 2x + 6x^2$ and $B(x) = 4 + 5x + 3x^2$, can you get $C(x) = A(x) \cdot B(x) = 4 + 5x + 3x^2 + 8x + 10x^2 + 6x^3 + 24x^2 + 30x^3 + 18x^4$ without going directly to the simplified answer? Furthermore, is it possible to use the FFT to do this expanded-form multiplication in $O(n \log n)$ time? If so, can you show me how to apply the FFT to this scenario? • Why would you want to do this? – Yuval Filmus Sep 22 '13 at 4:23 The trivial algorithm that multiplies every monomial in $A$ by every monomial in $B$ takes time $O(|A| \cdot |B|)$ (where $|A|$ is the number of monomials in $A$ or $\deg A + 1$, depending on the representation), which is the same order of magnitude as the size of the output, and so optimal. You only need the FFT if you want to actually compute the simplified product $AB$. In particular, there is no way to compute your function in time $O(n\log n)$, simply because the length of the output is $\Omega(n^2)$.
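To make the answer concrete, here is a sketch of the trivial monomial-by-monomial expansion (my own illustration; representing a polynomial as a list of (coefficient, exponent) pairs is an assumption, not part of the question):

def expand(A, B):
    # Every cross term of A*B, without collecting like powers.
    return [(ca * cb, ea + eb) for ca, ea in A for cb, eb in B]

A = [(1, 0), (2, 1), (6, 2)]  # 1 + 2x + 6x^2
B = [(4, 0), (5, 1), (3, 2)]  # 4 + 5x + 3x^2

# Nine terms, matching the unsimplified C(x) above; the output has
# |A|*|B| terms, which is why no O(n log n) algorithm can exist.
print(expand(A, B))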
Alacarte is known to be silly sometimes, especially if you typed something after /usr/bin/google-chrome-stable in the command field. I would suggest creating the shortcut without any additional switches and then editing it "by hand". The file should be located at ~/.local/share/applications/something.desktop. Open that file with your favorite text editor and edit the line that begins with: Exec=/usr/bin/google...
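For reference, a minimal launcher file of this kind might look like the sketch below (the file name and the Name value are placeholders; only the Exec line corresponds to the command discussed above):

[Desktop Entry]
Type=Application
Name=Google Chrome
Exec=/usr/bin/google-chrome-stable
Terminal=false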
For in GOD we live, and move, and have our being. - Acts 17:28 The Joy of a Teacher is the Success of his Students. - Samuel Chukwuemeka Solved Examples - Translating Word Problems to Algebraic Expressions For ACT Students The ACT is a timed exam...$60$ questions for $60$ minutes This implies that you have to solve each question in one minute. Some questions will typically take less than a minute to solve. Some questions will typically take more than a minute to solve. The goal is to maximize your time. You use the time saved on those questions you solved in less than a minute to solve the questions that will take more than a minute. So, you should try to solve each question correctly and timely. So, it is not just solving a question correctly, but solving it correctly on time. Please ensure you attempt all ACT questions. There is no "negative" penalty for any wrong answer. For JAMB and CMAT Students Calculators are not allowed. So, the questions are solved in a way that does not require a calculator. Translate each word problem from English to Math. Use appropriate variables as applicable. Do not solve. (1.) ACT Jane and Margaret moved to Newcity at the same time several years ago and have lived there ever since. Jane has lived there $\dfrac{1}{2}$ of her life, while Margaret has lived there $\dfrac{3}{5}$ of her life. If $j$ represents Jane's present age, which of the following expressions represents Margaret's present age? $F.\:\: \dfrac{3j}{10} \\[5ex] G.\:\: \dfrac{j}{2} \\[5ex] H.\:\: \dfrac{5j}{6} \\[5ex] J.\:\: \dfrac{6j}{5} \\[5ex] K.\:\: 2j \\[3ex]$ (2.) CSEC Write the following statement as an algebraic expression. The sum of a number and its multiplicative inverse is five times the number. Let the number be $p$ Multiplicative inverse of $p = \dfrac{1}{p}$ $p + \dfrac{1}{p} = 5p$ (3.) ACT Which of the following mathematical expressions is equivalent to the verbal expression "A number, $x$, squared is $39$ more than the product of $10$ and $x$"? $F.\:\: 2x = 39 + 10x \\[3ex] G.\:\: 2x = 39x + 10x \\[3ex] H.\:\: x^2 = 39 - 10x \\[3ex] J.\:\: x^2 = 39 + x^{10} \\[3ex] K.\:\: x^2 = 39 + 10x \\[3ex]$ "A number, $x$" means $x$ "squared" means $x^2$ "is" means $=$ "product of $10$ and $x$" means $10 * x = 10x$ "$39$ more than the product of $10$ and $x$" means $39 + 10x$ "A number, $x$, squared is $39$ more than the product of $10$ and $x$" means $x^2 = 39 + 10x$ (4.) Nahum bought $y$ children's admission tickets for $\$2$ each and $x$ adult's admission tickets for $\$7$ each. Write an algebraic expression for the total amount spent by Nahum. $y$ children's admission tickets for $\$2$ each = $y * 2 = 2y$ $x$ adult's admission tickets for $\$7$ each = $x * 7 = 7x$ $Total = 2y + 7x$ (5.) $\dfrac{2x + 3}{3} + \dfrac{x - 4}{4} \\[5ex] \dfrac{4(2x + 3)}{12} + \dfrac{3(x - 4)}{12} \\[5ex] \dfrac{8x + 12}{12} + \dfrac{3x - 12}{12} \\[5ex] \dfrac{8x + 12 + (3x - 12)}{12} \\[5ex] \dfrac{8x + 12 + 3x - 12}{12} \\[5ex] \dfrac{11x}{12}$ (6.) Samson's debt is seven less than half of David's debt. If $d$ represents David's debt, write an expression for Samson's debt. $David's\:\:debt = d \\[3ex] Half\:\:of\:\:David's\:\:debt = \dfrac{1}{2} * d = \dfrac{d}{2} \\[5ex] Seven\:\:less\:\:than\:\: \dfrac{d}{2} = \dfrac{d}{2} - 7 \\[5ex] Samson's\:\:debt = \dfrac{d}{2} - 7$ (7.) WASSCE Express $\dfrac{2}{x + 3} - \dfrac{1}{x - 2}$ as a simple fraction.
$A.\:\: \dfrac{x - 7}{x^2 + x - 6} \\[5ex] B.\:\: \dfrac{x - 1}{x^2 + x - 6} \\[5ex] C.\:\: \dfrac{x - 2}{x^2 + x - 6} \\[5ex] D.\:\: \dfrac{x + 7}{x^2 + x - 6} \\[5ex]$ $\dfrac{2}{x + 3} - \dfrac{1}{x - 2} \\[5ex] \dfrac{2(x - 2)}{(x + 3)(x - 2)} - \dfrac{1(x + 3)}{(x + 3)(x - 2)} \\[5ex] \dfrac{2(x - 2) - 1(x + 3)}{(x + 3)(x - 2)} \\[5ex] \dfrac{2x - 4 - x - 3}{x^2 - 2x + 3x - 6} \\[5ex] \dfrac{x - 7}{x^2 + x - 6}$ (8.) CSEC Express as a single fraction: $\dfrac{3p}{2} + \dfrac{q}{p}$ $\dfrac{3p}{2} + \dfrac{q}{p} \\[5ex] \dfrac{3p(p)}{2p} + \dfrac{q(2)}{2p} \\[5ex] \dfrac{3p^2}{2p} + \dfrac{2q}{2p} \\[5ex] \dfrac{3p^2 + 2q}{2p}$ (9.) $\sqrt{2p + 6} - \sqrt{7 - 2p} = 1$ $\sqrt{2p + 6} - \sqrt{7 - 2p} = 1$ To find the domain; $2p + 6 \ge 0 \\[3ex] 2p \ge 0 - 6 \\[3ex] 2p \ge -6 \\[3ex] p \ge -\dfrac{6}{2} \\[5ex] p \ge -3 \\[3ex] Also: \\[3ex] 7 - 2p \ge 0 \\[3ex] 7 \ge 2p \\[3ex] 2p \le 7 \\[3ex] p \le \dfrac{7}{2} \\[5ex] Combine\:\: both\:\: (which\:\: satisfies\:\: both) \\[3ex] p \ge -3 \\[3ex] -3 \le p \\[3ex] p \le \dfrac{7}{2} \\[5ex] -3 \le p \le \dfrac{7}{2} \\[5ex]$ The values excluded from the domain are all real numbers less than $-3$ and all real numbers greater than $\dfrac{7}{2}$ $D = \left[-3, \dfrac{7}{2}\right] \\[5ex] \sqrt{2p + 6} - \sqrt{7 - 2p} = 1 \\[3ex] \sqrt{2p + 6} = 1 + \sqrt{7 - 2p} \\[3ex] Square\:\: both\:\: sides \\[3ex] \left(\sqrt{2p + 6}\right)^2 = \left(1 + \sqrt{7 - 2p}\right)^2 \\[3ex] 2p + 6 = (1 + \sqrt{7 - 2p})(1 + \sqrt{7 - 2p}) \\[3ex] (1 + \sqrt{7 - 2p})(1 + \sqrt{7 - 2p}) \\[3ex] (1)(1) = 1 \\[3ex] (1)(\sqrt{7 - 2p}) = \sqrt{7 - 2p} \\[3ex] \sqrt{7 - 2p}(1) = \sqrt{7 - 2p} \\[3ex] (\sqrt{7 - 2p})(\sqrt{7 - 2p}) = \left(\sqrt{7 - 2p}\right)^2 = 7 - 2p \\[3ex] \implies 2p + 6 = 1 + \sqrt{7 - 2p} + \sqrt{7 - 2p} + (7 - 2p) \\[3ex] 2p + 6 = 1 + 2\sqrt{7 - 2p} + 7 - 2p \\[3ex] 2p + 6 = 2\sqrt{7 - 2p} - 2p + 8 \\[3ex] 2\sqrt{7 - 2p} - 2p + 8 = 2p + 6 \\[3ex] 2\sqrt{7 - 2p} = 2p + 6 + 2p - 8 \\[3ex] 2\sqrt{7 - 2p} = 4p - 2 \\[3ex] 2\sqrt{7 - 2p} = 2(2p - 1) \\[3ex] Divide\:\:both\:\:sides\:\:by\:\:2 \\[3ex] \dfrac{2\sqrt{7 - 2p}}{2} = \dfrac{2(2p - 1)}{2} \\[5ex] \sqrt{7 - 2p} = 2p - 1 \\[3ex] Square\:\: both\:\: sides\:\: again \\[3ex] \left(\sqrt{7 - 2p}\right)^2 = (2p - 1)^2 \\[3ex] 7 - 2p = (2p - 1)(2p - 1) \\[3ex] 7 - 2p = 4p^2 - 2p - 2p + 1 \\[3ex] 7 - 2p = 4p^2 - 4p + 1 \\[3ex] 0 = 4p^2 - 4p + 1 - 7 + 2p \\[3ex] 0 = 4p^2 - 2p - 6 \\[3ex] 4p^2 - 2p - 6 = 0 \\[3ex] 2(2p^2 - p - 3) = 0 \\[3ex] Divide\:\:both\:\:sides\:\:by\:\:2 \\[3ex] \dfrac{2(2p^2 - p - 3)}{2} = \dfrac{0}{2} \\[3ex] 2p^2 - p - 3 = 0 \\[3ex] 2p^2 + 2p - 3p - 3 = 0 \\[3ex] 2p(p + 1) - 3(p + 1) = 0 \\[3ex] (p + 1)(2p - 3) = 0 \\[3ex] p + 1 = 0 \:\:OR\:\: 2p - 3 = 0 \\[3ex] p = -1 \:\:OR\:\: 2p = 3 \\[3ex] p = -1 \:\:OR\:\: p = \dfrac{3}{2} \\[5ex]$ Check $\underline{LHS} \\[3ex] \sqrt{2p + 6} - \sqrt{7 - 2p} \\[3ex] p = -1 \\[5ex] \sqrt{2p + 6} - \sqrt{7 - 2p} \\[3ex] \sqrt{2(-1) + 6} - \sqrt{7 - 2(-1)} \\[3ex] = \sqrt{-2 + 6} - \sqrt{7 + 2} \\[3ex] = \sqrt{4} - \sqrt{9} \\[3ex] = 2 - 3 \\[3ex] = -1 \\[3ex]$ $p = -1$ is NOT a solution. $p = -1$ is an extraneous solution. $\sqrt{2p + 6} - \sqrt{7 - 2p} \\[3ex] p = \dfrac{3}{2} \\[5ex] \sqrt{2\left(\dfrac{3}{2}\right) + 6} - \sqrt{7 - 2\left(\dfrac{3}{2}\right)} \\[5ex] \sqrt{3 + 6} - \sqrt{7 - 3} \\[3ex] = \sqrt{9} - \sqrt{4} \\[3ex] = 3 - 2 \\[3ex] = 1 \\[3ex]$ $\color{black}{p = \dfrac{3}{2}}$ is a solution. $\underline{RHS} \\[3ex] 1$ (10.)
$\sqrt{p + 2} - 6 = 4$ $\sqrt{p + 2} - 6 = 4$ To find the domain; $p + 2 \ge 0 \\[3ex] p \ge -2 \\[3ex]$ The values excluded from the domain are all real numbers less than $-2$ $D = [-2, \infty) \\[3ex] \sqrt{p + 2} - 6 = 4 \\[3ex] \sqrt{p + 2} = 4 + 6 \\[3ex] \sqrt{p + 2} = 10 \\[3ex] Square\:\: both\:\: sides \\[3ex] (\sqrt{p + 2})^2 = 10^2 \\[3ex] p + 2 = 100 \\[3ex] p = 100 - 2 \\[3ex] p = 98 \\[3ex]$ Check $\underline{LHS} \\[3ex] \sqrt{p + 2} - 6 \\[3ex] p = 98 \\[3ex] \sqrt{98 + 2} - 6 \\[3ex] = \sqrt{100} - 6 \\[3ex] = 10 - 6 \\[3ex] = 4 \\[3ex]$ $\color{black}{p = 98}$ is a solution $\underline{RHS} \\[3ex] 4$
# Obtaining an acyclic graph by removing edges using an algorithm that decides ACYCLIC I don't understand the following: If there's an algorithm that can decide ACYCLIC in polynomial time, then there's an algorithm that returns a set of k edges, such that the graph obtained by deleting those k edges is without cycles - in polynomial time. The algorithm should get a directed graph and a natural number k as input and output, if there are k edges as needed, a list of k edges such that the graph obtained from erasing those k edges is acyclic. If there are no such k edges, it simply outputs "no". Question: my question, in addition to the answer already given, concerns this part: "then there's an algorithm that returns a set of k edges, so that the graph obtained by deleting the k edges is without cycles - in polynomial time." What could this algorithm be? How would one implement it on a Turing machine? Problematic part: I can only use an algorithm that decides ACYCLIC, it is forbidden to use any other NP-complete algorithms, and its running time must be polynomial in its input size. My attempt: well, to check/decide whether a directed graph is ACYCLIC or not, we'll visit it in topological order using DFS; then, using a stack, we'll traverse the edges to see if any edge in the digraph leads back to a vertex already visited. If it does - there's a cycle; if not - there's no cycle. The algorithm: on an input of a directed graph, to check ACYCLIC: 1) find a vertex that has only outgoing edges - if no such vertex exists - return "graph has cycles" 2) from that vertex, run DFS and traverse the digraph; add each edge found to a stack. If a vertex is seen twice - return "graph has cycles". 3) if no cycles are found, accept. But I am not sure how to adapt this to the algorithm required in the problem (first two paragraphs of the question) - basically, returning a set of k edges such that, by removing them, the graph becomes acyclic. I would really appreciate knowing how to do it. Thank you very much. • Please edit the question to provide a definition of the language ACYCLIC. Also, what is your question? I don't see any question in your post. What's a circle? Do you mean cycle? Please edit accordingly. – D.W. Feb 11 '20 at 20:46 • thank you very much for your comment. edited the question. in addition to the great answer already given i was wondering - what could the algorithm be to obtain the following: "The algorithm should get a directed graph and a natural k as an input, and output, if there are k edges as needed, a list of k edges, so that the graph obtained from erasing those k edges is acyclic. if there are no such k edges, it simply outputs "no"."? – pseudoturing Feb 12 '20 at 21:04 Disclaimer This solution assumes that the language $$\text{Acyclic}$$ is the language that contains exactly all acyclic directed graphs. It is impossible to achieve this in polynomial time unless $$\operatorname{P}=\operatorname{NP}$$. The reason is that the problem you have to solve is NP-hard. It is called the directed feedback arc set problem. It is one of Karp's 21 NP-complete problems. On the other hand, finding whether a graph is acyclic can be done with any graph traversal method in polynomial (actually linear) time. Hence, $$\operatorname{P}^{\text{Acyclic}} = \operatorname P$$. Hence, if you solved the task you are given in polynomial time you would have proven $$\operatorname{P} = \operatorname{NP}$$.
• Well if exponential running-time is okay for you, you can try removing all sets of $k$ edges and checking if the resulting graph has cycles in it. The running time in this case will be something like $O(n^k \cdot \operatorname{poly}(n))$ – narek Bojikian Feb 12 '20 at 22:38
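Putting the answer and the comment together: with any polynomial-time ACYCLIC decider used as a black box, the exponential brute-force search over all $k$-subsets of edges could look like the sketch below (my own code, not from the thread; a DFS cycle check stands in for the assumed decider):

from itertools import combinations

def is_acyclic(nodes, edges):
    """Polynomial-time ACYCLIC decider: DFS looking for a back edge."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in nodes}

    def has_cycle(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY or (color[w] == WHITE and has_cycle(w)):
                return True  # a GRAY neighbour means a back edge, i.e. a cycle
        color[u] = BLACK
        return False

    return not any(color[u] == WHITE and has_cycle(u) for u in nodes)

def k_edge_deletion(nodes, edges, k):
    """Try every k-subset of edges; exponential, as the comment notes."""
    for removed in combinations(edges, k):
        kept = [e for e in edges if e not in removed]
        if is_acyclic(nodes, kept):
            return list(removed)
    return "no"

# Example: a 3-cycle becomes acyclic after deleting any single edge.
print(k_edge_deletion([1, 2, 3], [(1, 2), (2, 3), (3, 1)], 1))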
# Is there a punctuation mark that does not take up any space? I'm typesetting a text from a 17th century Chinese woodblock print. The original text is printed vertically, and I'm printing it horizontally. The text has the same number of Chinese characters per line. It also has punctuation marks, which are in the lower right corner of every character. These marks take up no space, so that although there is an uneven number of punctuation marks in every line, the lines remain of the same length. However, if I use the CJK punctuation mark 。 for punctuation, that counts as one character, creating lines of uneven length. Is it possible to create a punctuation mark to be placed as a subscript in the lower right corner of a Chinese character, which will not take up any space on the line, keeping the lines of uniform length? • What about \rlap{${}_.$}, or whatever you use for punctuation. – Werner Dec 4 '13 at 3:07 • I don't quite understand. Punctuation goes where . is, and the character that precedes the punctuation mark goes where _ is? – Mårten Dec 4 '13 at 3:25 • How do you support Chinese chars in your LaTeX document? XeTeX with the package xeCJK or (pdf)TeX with the package CJK? In addition, could you draw a diagrammatic sketch that shows where a certain punctuation mark should be placed, since I haven't seen any ancient Chinese writings place their punctuation marks as subscripts, although I am Chinese. – Ch'en Meng Dec 4 '13 at 3:27 • BTW, if you want to typeset Chinese chars vertically, have a look at this answer. – Ch'en Meng Dec 4 '13 at 3:29 • Sure. I use xeCJK. An example of the punctuation is seen here: db.tt/3QPRq5vW – Mårten Dec 4 '13 at 3:34 Here is an answer inspired by Werner's comment above. You can set a punctuation mark as \active, and then adjust its position. Code: %!TEX program = xelatex \documentclass{article} \usepackage{xeCJK} \setCJKmainfont{SimSun} \makeatletter \begingroup \catcode`\,=\active \@firstofone{\endgroup\protected\def,}{% \hskip -.4ex\rlap{\raise -.9 ex \hbox{,}}\hskip .4ex\ignorespaces} \makeatother \def\ActivePunct{\catcode`\,=\active} \def\InactivePunct{\catcode`\,= 12 } \begin{document} \ActivePunct \InactivePunct \end{document} And the output: You can amend any other punctuation marks as you wish. Known issues: • Could not break a line when the specific punctuation mark is at the end of a line; • Could be out of alignment when the specific punctuation mark is at the end of a line. It's NOT perfect, so use it carefully. • There can be other solutions without changing the catcodes for this particular problem. BTW, 句读只有、和。 – Leo Liu Dec 6 '13 at 9:31 • @LeoLiu I wonder if I should communicate with you in Chinese.. Could you give an alternative which is more robust and more convenient than my answer? I would be glad to see that (hope the OP will also be). To your 'BTW', you're right. :) – Ch'en Meng Dec 6 '13 at 9:43 This may be simpler: \documentclass{article} \usepackage{xeCJK} \setCJKmainfont{SimSun} \punctstyle{plain} \def\CJKpunctsymbol#1{\raise-1ex\hbox to 0pt{\kern-.1em#1}} \begin{document} \end{document}
## Monday, December 10, 2018 ### More queens problems (2) In [1], I discussed the Minimum Dominating Set of Queens Problem. Here is a variant: Given an $$n \times n$$ chess board, place $$n$$ queens on the board such that the number of squares not under attack is maximized. For an $$n=8$$ problem, the maximum number of non-dominated squares is 11. An example is: 11 white pawns are not under attack We base the model on the one in [1]. First we define two sets of binary variables: \begin{align} & x_{i,j} = \begin{cases} 1 & \text{if square $$(i,j)$$ is occupied by a queen}\\ 0 & \text{otherwise}\end{cases} \\ & y_{i,j} = \begin{cases} 1 & \text{if square $$(i,j)$$ is under attack} \\ 0 & \text{otherwise}\end{cases}\end{align} The model can look like: Maximum Non-Dominated Squares Problem \begin{align}\max& \>\color{DarkRed}z=\sum_{i,j} \left(1-\color{DarkRed} y_{i,j}\right)\\ & \sum_{j'} \color{DarkRed}x_{i,j'} + \sum_{i'} \color{DarkRed}x_{i',j} + \sum_{i',j'|i'-j'=i-j} \color{DarkRed}x_{i',j'} +\sum_{i',j'|i'+j'=i+j} \color{DarkRed}x_{i',j'}-3 \color{DarkRed} x_{i,j} \le \color{DarkBlue}M \cdot \color{DarkRed}y_{i,j} && \forall i,j \\ & \sum_{i,j} \color{DarkRed} x_{i,j} = n \\& \color{DarkRed}x_{i,j},\color{DarkRed}y_{i,j}\in\{0,1\} \end{align} The attack count on the left-hand side must be at most $$M\cdot y_{i,j}$$: this forces $$y_{i,j}=1$$ as soon as square $$(i,j)$$ is attacked or occupied, so only genuinely free squares can contribute to the objective. The funny term $$-3x_{i,j}$$ is to compensate for double counting occurrences of $$x_{i,j}$$ in the previous terms [1]. We need to find a reasonable value for $$M$$. An upper bound is $$M=n$$. For $$n=8$$, there are 48 different optimal solutions. The problem $$n=9$$ has one structural optimal solution (with 18 unattacked positions), and 3 variants by symmetry. Is there a good way to find only structural solutions? One way is to use your own cuts as in [4]. If we want to use the solution pool, we need to invent a constraint that forbids some symmetry (or have an objective that favors some of the symmetric versions). #### References 1. More queens problems, https://yetanothermathprogrammingconsultant.blogspot.com/2018/12/more-queens-problems.html 2. Bernard Lemaire, Pavel Vitushinkiy, Placing $$n$$ non-dominating queens on the $$n\times n$$ Chessboard, 2011, https://www.ffjm.org/upload/fichiers/N_NON_DOMINATING_QUEENS.pdf 3. Maximum Queens Chess Problem, https://www.gams.com/latest/gamslib_ml/queens.103 ## Saturday, December 8, 2018 ### More queens problems In [1] the famous $$n$$-queens problem was discussed: how many queens can we place on a chess board such that none of them is attacked by the others. Interestingly, we had a non-obvious problem with Cplex enumerating the different configurations (due to tolerance issues). A different problem is the Minimum Dominating Set of Queens Problem: Find the minimum number of queens needed such that they dominate each square. Here dominate means: either a square is under attack by at least one queen, or it is occupied by a queen. To develop a Mixed Integer Programming model, we introduce the familiar binary variables $x_{i,j} = \begin{cases} 1 & \text{if square $$(i,j)$$ is occupied by a queen}\\ 0 & \text{otherwise}\end{cases}$ A square $$(i,j)$$ is dominated if: 1. $$x_{i,j}=1$$: a queen is located here. This case will be covered by the following other cases, so we don't have to worry about this. 2. There is a queen in row $$i$$: $\sum_{j'} x_{i,j'} \ge 1$ 3. There is a queen in column $$j$$: $\sum_{i'} x_{i',j} \ge 1$ 4. There is a queen in the same diagonal. The diagonal is described by $$(i',j')$$ such that $$i'-j'=i-j$$.
So we have: $\sum_{i',j'|i'-j'=i-j} x_{i',j'}\ge 1$ If you prefer you can write this as $\sum_{i'|1 \le i'-i+j \le n } x_{i',i'-i+j} \ge 1$

5. There is a queen in the same anti-diagonal. The anti-diagonal is described by $$(i',j')$$ such that $$i'+j'=i+j$$. So we have: $\sum_{i',j'|i'+j'=i+j} x_{i',j'}\ge 1$ This can also be written as $\sum_{i'| 1 \le i+j-i' \le n} x_{i',i+j-i'} \ge 1$

We can combine these conditions into one big constraint, giving:

Minimum Dominating Set of Queens Problem \begin{align}\min& \>\color{DarkRed}z=\sum_{i,j} \color{DarkRed} x_{i,j}\\ & \sum_{j'} \color{DarkRed}x_{i,j'} + \sum_{i'} \color{DarkRed}x_{i',j} + \sum_{i',j'|i'-j'=i-j} \color{DarkRed}x_{i',j'} +\sum_{i',j'|i'+j'=i+j} \color{DarkRed}x_{i',j'} \ge 1 && \forall i,j \\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

(A compact GAMS sketch of this model is shown at the end of this post.)

For an $$8\times 8$$ board, we need 5 queens. Here are two solutions:

You can verify that every unoccupied square is under attack. Note that this formulation actually does a bit of double counting. E.g. when we look at the constraint for cell $$(2,3)$$ we see:

e(2,3).. x(1,2) + x(1,3) + x(1,4) + x(2,1) + x(2,2) + 4*x(2,3) + x(2,4) + x(2,5)
         + x(2,6) + x(2,7) + x(2,8) + x(3,2) + x(3,3) + x(3,4) + x(4,1) + x(4,3)
         + x(4,5) + x(5,3) + x(5,6) + x(6,3) + x(6,7) + x(7,3) + x(7,8) + x(8,3)
         =G= 1 ; (LHS = 0, INFES = 1 ****)

We see that x(2,3) has a coefficient 4. This is harmless. However it looks a bit unpolished. With a little bit more careful modeling we can get rid of this coefficient 4. A complicated way would be: $\sum_{j'}x_{i,j'} + \sum_{i'|i'\ne i} x_{i',j} + \sum_{i',j'|i'-j'=i-j,i'\ne i, j'\ne j} x_{i',j'} +\sum_{i',j'|i'+j'=i+j,i'\ne i, j'\ne j} x_{i',j'} \ge 1 \>\> \forall i,j$ The first summation includes $$x_{i,j}$$ while the other three summations explicitly exclude an occurrence of $$x_{i,j}$$. Simpler is just to subtract $$3 x_{i,j}$$: $\sum_{j'} x_{i,j'} + \sum_{i'} x_{i',j} + \sum_{i',j'|i'-j'=i-j} x_{i',j'} +\sum_{i',j'|i'+j'=i+j} x_{i',j'}-3x_{i,j} \ge 1\>\> \forall i,j$ Here we just compensate for the double counting. Now we see:

e(2,3).. x(1,2) + x(1,3) + x(1,4) + x(2,1) + x(2,2) + x(2,3) + x(2,4) + x(2,5)
         + x(2,6) + x(2,7) + x(2,8) + x(3,2) + x(3,3) + x(3,4) + x(4,1) + x(4,3)
         + x(4,5) + x(5,3) + x(5,6) + x(6,3) + x(6,7) + x(7,3) + x(7,8) + x(8,3)
         =G= 1 ; (LHS = 0, INFES = 1 ****)

When we ask: how many different solutions are there?, we use again the Cplex solution pool. And again, we have our tolerance problem:

pool.absgap    Number of solutions
0              2
0.5            4860

As the objective is integer valued, in some sense an absolute gap tolerance of 0.5 seems the safest. With this we find the correct number of solutions [2]. See [1] for a more in-depth discussion of this tolerance problem.

#### References

1. Chess and solution pool, http://yetanothermathprogrammingconsultant.blogspot.com/2018/11/chess-and-solution-pool.html
2. Weisstein, Eric W., Queens Problem, http://mathworld.wolfram.com/QueensProblem.html
3. Henning Fernau, minimum dominating set of queens: A trivial programming exercise?, Discrete Applied Mathematics, Volume 158, Issue 4, 28 February 2010, Pages 308-318. This is about programming instead of modeling. The issues are very different.
4. Saad Alharbi and Ibrahim Venkat, A Genetic Algorithm Based Approach for Solving the Minimum Dominating Set of Queens Problem, Hindawi Journal of Optimization, Volume 2017. I am not sure why a meta-heuristic is a good choice for solving this problem: you will not find proven optimal solutions.
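As promised, here is a minimal GAMS sketch of the minimum dominating set model above. This is my own rendering of the formulation, not code from the original post; identifiers like dominate are mine.

set i /1*8/;
alias (i,j,ii,jj);

binary variable x(i,j) 'queen on square (i,j)';
variable z 'number of queens';

equations obj, dominate(i,j);

obj.. z =e= sum((i,j), x(i,j));

* row + column + diagonal + anti-diagonal, compensating for double counting
dominate(i,j)..
    sum(jj, x(i,jj)) + sum(ii, x(ii,j))
  + sum((ii,jj)$(ord(ii)-ord(jj) = ord(i)-ord(j)), x(ii,jj))
  + sum((ii,jj)$(ord(ii)+ord(jj) = ord(i)+ord(j)), x(ii,jj))
  - 3*x(i,j) =g= 1;

model dom /all/;
solve dom using mip minimizing z;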
## Tuesday, November 27, 2018

### Chess and solution pool

The $$n$$-queens problem is a popular example of a "chessboard non-attacking problem" [1,2]:

• On an $$n \times n$$ chessboard place as many queens as possible such that none of them is attacked
• Given this, how many different ways can we place those queens?

We use the usual $$8 \times 8$$ board.

#### Rooks

A related, simpler problem is about placing rooks. The optimization problem is very simple: it is an assignment problem:

n-Rooks Problem \begin{align}\max&\> \color{DarkRed}z=\sum_{i,j} \color{DarkRed} x_{i,j}\\ & \sum_j \color{DarkRed}x_{i,j} \le 1 && \forall i\\& \sum_i \color{DarkRed}x_{i,j} \le 1 && \forall j\\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

For an $$8 \times 8$$ board it is obvious that the number of rooks we can place is 8. Here we show two configurations:

We can easily answer: how many alternative configurations with 8 rooks can we find? This is $n! = 8! = 40,320$ Basically, we enumerate all row and column permutations of the diagonal depicted above. We can use the solution pool in solvers like Cplex and Gurobi to find all of them. Here I use Cplex and the performance is phenomenal: all 40,320 solutions are found in 16 seconds.

#### Board

In the models in this write-up, I assume a board that has the coordinates:

Coordinate System in Models

This also means that the main diagonal and anti-diagonal have the form $$\{(i,j)|i=j\}$$ and $$\{(i,j)|i+j=9\}$$. More generally, for the downward sloping main diagonals we have the rule: $i=j+k$ for $$k=-6,\dots,6$$. The anti-diagonals can be described as: $i+j=k$ for $$k=3,\dots,15$$. We can illustrate this with:

Diagonals and anti-diagonals

The first picture has cell values $$a_{i,j}=i-j$$ and the second one has cell values $$a_{i,j}=i+j$$.

#### Bishops

The model for placing bishops is slightly more complicated: we need to check the main diagonals and the anti-diagonals. We use the notation from the previous paragraph. So with this, our model can look like:

n-Bishops Problem \begin{align}\max& \>\color{DarkRed}z=\sum_{i,j} \color{DarkRed} x_{i,j}\\ & \sum_{i-j=k} \color{DarkRed}x_{i,j} \le 1 && k=-(n-2),\dots,(n-2)\\& \sum_{i+j=k} \color{DarkRed}x_{i,j} \le 1 && k=3,\dots,2n-1\\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

Note that these constraints translate directly into GAMS:

bishop_main(k1).. sum((i,j)$(v(i)-v(j)=v(k1)),x(i,j)) =l= 1;
bishop_anti(k2).. sum((i,j)$(v(i)+v(j)=v(k2)),x(i,j)) =l= 1;

Two possible solutions are:

We can accommodate 14 bishops. There are 256 different solutions (according to Cplex). Note: the GAMS display of the solution is not so useful:

----  47 VARIABLE x.L
    1  2  4  5  7  8
1   1  1  1  1
2   1
3   1  1
6   1  1
7   1
8   1  1  1  1

For this reason I show the solution using the $$\LaTeX$$ chessboard package [5]. You should be aware that the coordinate system is different from what we use in the models.

#### Queens

The $$n$$ queens problem is now very simple: we just need to combine the two previous models.

n-Queens Problem \begin{align}\max&\> \color{DarkRed}z= \sum_{i,j} \color{DarkRed} x_{i,j}\\ & \sum_j \color{DarkRed}x_{i,j} \le 1 && \forall i\\& \sum_i \color{DarkRed}x_{i,j} \le 1 && \forall j\\ & \sum_{i-j=k} \color{DarkRed}x_{i,j} \le 1 && k=-(n-2),\dots,(n-2)\\& \sum_{i+j=k} \color{DarkRed}x_{i,j} \le 1 && k=3,\dots,2n-1\\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

A solution can look like:

We can place 8 queens and there are 92 solutions.

Notes:

• Cplex delivered 91 solutions. This seems to be a tolerance issue.
I used a solution pool absolute gap of zero (option SolnPoolAGap=0, or as it is called more recently: mip.pool.absgap=0) [4]. As the objective jumps by one, it is safe to set the absolute gap to 0.01. With this gap we found all 92 solutions. Is this a bug? Maybe. Possibly. Probably. This 92nd solution is not supposed to be cut off. Obviously, poor scaling is not an issue here: this model is as well-scaled as you can get. I suspect that the (somewhat poorly understood) combination of tolerances (feasibility, integer, optimality) causes Cplex to behave this way. If too many binary variables assume 0.99999 (say within the integer tolerance) we have an objective which is too small. Indeed, we can also get to 92 solutions by setting the integer tolerance epint=0. Note that setting the integer tolerance to zero will often increase solution times. Often tolerances "help" us: they make it easier to find feasible or optimal solutions. Here is a case where they really cause problems. Of course this discussion is not a good advertisement for Mixed Integer Programming models. Preferably we should not have to worry about technical details such as tolerances when building and solving models that are otherwise fairly straightforward (no big-M's, well-scaled, small in size).
• A two step algorithm can help to prevent the tolerance issue discussed above:
  1. Solve the maximization problem: we find the max number of queens $$z^*$$.
  2. Create a feasibility problem with the number of queens equal to $$z^*$$. I.e. we add the constraint: $\sum_{i,j} x_{i,j}=z^*$ This problem has no objective. Find all feasible integer solutions using the solution pool. We don't need to set a solution pool gap for this.
  On the surface, this looks just like the method above. However, physically having the constraint $$\sum x_{i,j}=z^*$$ in the model will make sure all relaxations obey this constraint. So any binary variables that are automatically integer (i.e. close to 0 or 1) will never cause us to deviate too much from $$z^*$$. This is subtle.
• Many of the 92 solutions are the result of simple symmetric operations, such as rotating the board, or reflection [2]. The number of different solutions after ignoring these symmetries is just 12. The GAMS model in [3] finds those.
• The model in [3] handles the bishop constraints differently, by calculating an offset and writing $\sum_i x_{i,i +\mathit{sh}_s} \le 1 \>\>\forall s$. I somewhat prefer our approach.

#### Kings

Kings are handled quite cleverly in [1]. They observe that there cannot be more than one king in each block $\begin{matrix} x_{i,j} & x_{i,j+1}\\ x_{i+1,j} & x_{i+1,j+1}\end{matrix}$ Note that we only have to look forward (and downward), as a previous block would have covered things to the left (or above). The model can look like:

n-Kings Problem \begin{align}\max& \>\color{DarkRed}z=\sum_{i,j} \color{DarkRed} x_{i,j}\\ & \color{DarkRed}x_{i,j}+\color{DarkRed}x_{i+1,j}+\color{DarkRed}x_{i,j+1}+\color{DarkRed}x_{i+1,j+1}\le 1 && \forall i,j\le n-1\\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

Two solutions are:

We can place 16 kings, and there are a lot of possible configurations: 281,571 (says Cplex).

#### Knights

To place knights we set up a set $$\mathit{jump}_{i,j,i',j'}$$ indicating if we can jump from cell $$(i,j)$$ to cell $$(i',j')$$.
We only need to look forward, so for each $$(i,j)$$ we need to consider four cases:

|         | $$j$$       | $$j+1$$         | $$j+2$$         |
|---------|-------------|-----------------|-----------------|
| $$i-2$$ |             | $$x_{i-2,j+1}$$ |                 |
| $$i-1$$ |             |                 | $$x_{i-1,j+2}$$ |
| $$i$$   | $$x_{i,j}$$ |                 |                 |
| $$i+1$$ |             |                 | $$x_{i+1,j+2}$$ |
| $$i+2$$ |             | $$x_{i+2,j+1}$$ |                 |

Note that near the border we may have fewer than four cases. In GAMS we can populate the set $$\mathit{jump}$$ straightforwardly:

jump(i,j,i-2,j+1) = yes;
jump(i,j,i-1,j+2) = yes;
jump(i,j,i+1,j+2) = yes;
jump(i,j,i+2,j+1) = yes;

The model can look like:

n-Knights Problem \begin{align}\max& \>\color{DarkRed}z=\sum_{i,j} \color{DarkRed} x_{i,j}\\ & \color{DarkRed}x_{i,j}+\color{DarkRed}x_{i',j'}\le 1 && \forall i,j,i',j'|\mathit{jump}_{i,j,i',j'}\\ & \color{DarkRed}x_{i,j}\in\{0,1\} \end{align}

We can place 32 knights. There are only two different solutions:

#### Conclusion

The solution pool is a powerful tool to enumerate possibly large numbers of integer solutions. However, with default tolerances, setting the solution pool absolute gap tolerance to zero may cause perfectly good integer solutions to be missed. This is dicey stuff.

#### References

1. L. R. Foulds and D. G. Johnston, An Application of Graph Theory and Integer Programming: Chessboard Non-Attacking Puzzles, Mathematics Magazine Vol. 57, No. 2 (Mar., 1984), pp. 95-104
2. Eight queens puzzle, https://en.wikipedia.org/wiki/Eight_queens_puzzle
3. Maximum Queens Chess Problem, https://www.gams.com/latest/gamslib_ml/queens.103
4. Absolute gap for solution pool, https://www.ibm.com/support/knowledgecenter/en/SSSA5P_12.7.0/ilog.odms.cplex.help/CPLEX/Parameters/topics/SolnPoolAGap.html
5. chessboard – Print chess boards, https://ctan.org/pkg/chessboard
6. Danna E., Fenelon M., Gu Z., Wunderling R. (2007) Generating Multiple Solutions for Mixed Integer Programming Problems. In: Fischetti M., Williamson D.P. (eds) Integer Programming and Combinatorial Optimization. IPCO 2007. Lecture Notes in Computer Science, vol 4513. Springer, Berlin, Heidelberg

## Monday, November 19, 2018

### Solving many scenarios

In [1] a problem is described:

Select 2 teams of 6 players from a population of 24 players such that the average ELO rating of each team is as close as possible.

It came as a surprise to me that, given some random data for the ratings, I was always able to find two teams with exactly the same average rating. When we try this for, say, 1,000 cases, we have to solve 1,000 independent scenarios, each a small MIP model. It is interesting to see how we can optimize this loop using GAMS.

A. Standard loop (elapsed 12:12). Solve 1,000 models:

loop(k,
  r(i) = round(normal(1400,400));
  solve m minimizing z using mip;
  trace(k,j) = avg.l(j);
  trace(k,'obj') = z.l;
);

B. Optimized loop (elapsed 3:33). Reduce printing to the listing file and use the DLL interface:

m.solvelink=5;
m.solprint=2;

C. Scenario solver (elapsed 1:32). Solve 1,000 scenarios by updating the model. Solver threads: 1.

solve m min z using mip scenario dict;

D. Scenario solver (elapsed 2:19). Tell the MIP solver to use 4 threads:

m.threads=4;

E. Asynchronous scenario solver (elapsed 0:25). Each asynchronous solve calls the scenario solver with 250 scenarios:

m.solveLink = 3;
loop(batch,
  ksub(k) = batchmap(batch,k);
  solve m min z using mip scenario dict;
  h(batch) = m.handle;
);

Note that these solves operate asynchronously.

F. Combine into one problem (elapsed 7:22). Add index $$k$$ to each variable and equation and solve as one big MIP (see below for details). Use 4 threads.

The standard loop is the most self-explanatory. But we incur quite some overhead. In each cycle we have:
1. GAMS generates the model and prints the equation and column listings
2. GAMS shuts down after saving its environment
3. The solver executable is called
4. GAMS restarts and reads back the environment
5. GAMS reads the solution and prints it

The optimized loop keeps GAMS in memory and calls the solver as a DLL instead of an executable. In addition, printing is omitted. GAMS is still generating each model, and a solver is loading and solving the model from scratch.

The scenario solver keeps the model stored inside the solver and applies updates. In essence the GAMS loop is moved closer to the solver. To enable the scenario solver, we calculate all random ratings in advance:

rk(k,i) = round(normal(1400,400));

As these are very small MIP models, adding solver threads to solve each model is not very useful. In our case it even made things slightly worse. For small models most of the work is sequential (presolve, scaling, preprocessing etc.) and there is also some overhead in doing parallel MIP (fork, wait, join).

Approach E is to set up asynchronous scenario solves. We split the 1,000 scenarios into 4 batches of 250 scenarios. We then solve each batch in parallel and use the scenario solver to solve a batch using 1 solver thread. This more coarse-grained parallelism is way more effective than using multiple threads inside the MIP solver. With this approach we can solve on average 40 small MIP models per second, which I think is pretty good throughput. A disadvantage is that the tools for this are rather poorly designed and the setup is quite cumbersome: different loops are needed to launch solvers asynchronously and to retrieve results after threads terminate.

There is still some room for further improvements. Using parallel scenario solves we have to assign scenarios to threads in advance. It is possible that one or more threads finish their workload early. This type of starvation is difficult to prevent with parallel scenario solves (basically the scenario solver should become more capable and be able to handle multiple threads).

Finally, I also tried to combine all scenarios into one big MIP model. I.e. we have:

Single scenario (30 equations, 51 variables, 48 binary variables): \begin{align}\min\>& \color{DarkRed} z \\ & \sum_j \color{DarkRed} x_{i,j} \le 1 && \forall i\\ & \sum_i \color{DarkRed} x_{i,j} = \color{DarkBlue} n && \forall j \\ & \color{DarkRed}{\mathit{avg}}_j = \frac{\displaystyle \sum_i \color{DarkBlue} r_i \color{DarkRed} x_{i,j}}{\color{DarkBlue} n} \\ & - \color{DarkRed} z \le \color{DarkRed}{\mathit{avg}}_2 - \color{DarkRed}{\mathit{avg}}_1 \le \color{DarkRed} z \\ & \color{DarkRed}x_{i,j} \in \{0,1\}\end{align}

Combined model (30,000 equations, 51,000 variables, 48,000 binary variables): \begin{align}\min\>& \color{DarkRed} z_{total} = \sum_k \color{DarkRed} z_k \\ & \sum_j \color{DarkRed} x_{i,j,k} \le 1 && \forall i,k\\ & \sum_i \color{DarkRed} x_{i,j,k} = \color{DarkBlue} n && \forall j,k \\ & \color{DarkRed}{\mathit{avg}}_{j,k} = \frac{\displaystyle \sum_i \color{DarkBlue} r_{i,k} \color{DarkRed} x_{i,j,k}}{\color{DarkBlue} n} \\ & - \color{DarkRed} z_k \le \color{DarkRed}{\mathit{avg}}_{2,k} - \color{DarkRed}{\mathit{avg}}_{1,k} \le \color{DarkRed} z_k \\ & \color{DarkRed}x_{i,j,k} \in \{0,1\}\end{align}

The combined model is a large MIP model. It is faster than our first loop (too much overhead in the looping).
But as soon as we get the overhead under control, we see again that solving $$K=1,000$$ small models is better than solving one big model that is $$K$$ times as large. It is noted that this combined model can be viewed as having a block-diagonal structure. After reordering the rows and the columns, the LP matrix can look like:

Structure of the combined model (after reordering)

LP solvers do not really try to exploit this structure and just look at it as a big, sparse matrix. This looks like a somewhat special case: many, independent, very small MIP models. However, I have been involved in projects dealing with multi-criteria design problems that had similar characteristics.

Conclusions:

• The end result: even for 1,000 random scenarios, we can find in each case two teams that have exactly the same average ELO rating.
• Solving many different independent scenarios may require some attention to achieve the best performance.

## Sunday, November 11, 2018

### Selecting Chess Players

In [1] a simple problem was posted:

• We have a population of $$N=24$$ players, each with an ELO rating $$r_i$$
• We need to select $$2 \times 6$$ players for 2 teams (each team has $$n=6$$ members).
• We want to minimize the difference in average ELO ratings of the teams.

The poster asked for an algorithm. But, of course, this looks like a problem we can solve as a mathematical programming model. First I generated some random data:

---- 11 PARAMETER r ELO rating

player1 1275, player2 1531, player3 1585, player4 668, player5 1107, player6 1011
player7 1242, player8 1774, player9 1096, player10 1400, player11 1036, player12 1538
player13 1135, player14 1206, player15 2153, player16 1112, player17 880, player18 850
player19 1528, player20 1875, player21 939, player22 1684, player23 1807, player24 1110

I have no idea if these numbers are realistic or not. It makes sense to look at this from an assignment problem point of view. It is amazing how often this concept is encountered in modeling. So we define: $x_{i,j}=\begin{cases} 1 & \text{if player $$i$$ is assigned to team $$j$$}\\ 0 & \text{otherwise}\end{cases}$

A high-level model can look like:

High-level Model \begin{align}\min\>& | \color{DarkRed}{\mathit{avg}}_2 - \color{DarkRed}{\mathit{avg}}_1 | \\ & \sum_j \color{DarkRed} x_{i,j} \le 1 && \forall i\\ & \sum_i \color{DarkRed} x_{i,j} = \color{DarkBlue} n && \forall j \\ & \color{DarkRed}{\mathit{avg}}_j = \frac{\displaystyle \sum_i \color{DarkBlue} r_i \color{DarkRed} x_{i,j}}{\color{DarkBlue} n} \\ & \color{DarkRed}x_{i,j} \in \{0,1\}\end{align}

As you can see, this model is largely an assignment problem plus some average calculations. Notes:

• The absolute value is easily linearized in different ways:
  • Bounding: \begin{align} \min\> & z \\ & -z \le \mathit{avg}_2-\mathit{avg}_1\le z\end{align}
  • Variable splitting: \begin{align} \min\> & z^{+}+z^{-} \\ & z^{+}-z^{-} = \mathit{avg}_2-\mathit{avg}_1 \\ & z^{+},z^{-}\ge 0 \end{align}
• Removing symmetry. We can require that the average rating of team 1 is not lower than that of team 2.
Now we can just minimize the difference: \begin{align} \min\> & \mathit{avg}_1 - \mathit{avg}_2\\ & \mathit{avg}_1 \ge \mathit{avg}_2 \end{align}
• We can use the sum instead of the average.

Surprisingly, the solution looks like:

---- 43 VARIABLE x.L assignment

team1 team2
player1 1.000
player2 1.000
player4 1.000
player5 1.000
player6 1.000
player7 1.000
player8 1.000
player9 1.000
player10 1.000
player11 1.000
player17 1.000
player18 1.000

---- 43 VARIABLE avg.L average rating of team

team1 1155.833, team2 1155.833

---- 43 PARAMETER report solution report

team1 team2
player1 1275.000
player2 1531.000
player4 668.000
player5 1107.000
player6 1011.000
player7 1242.000
player8 1774.000
player9 1096.000
player10 1400.000
player11 1036.000
player17 880.000
player18 850.000
sum 6935.000 6935.000
avg 1155.833 1155.833

We achieved a perfect match! The probability of this must be somewhat low. Well, no. If we try this 10 times with different random $$r_i$$, we get 10 times a perfect match.

---- 50 PARAMETER trace results for each solve

      avg1      avg2
k1    1155.833  1155.833
k2    1583.333  1583.333
k3    1385.333  1385.333
k4    1258.500  1258.500
k5    1423.167  1423.167
k6    1491.833  1491.833
k7    1262.167  1262.167
k8    1736.167  1736.167
k9    1514.167  1514.167
k10   1483.667  1483.667

Using a solve loop of length 100 gives the same result. It looks like we can almost always form two teams with equal average ELO rating. I suspect it will be very difficult to derive anything analytically about the probability of not being able to find a perfect match. I was surprised by these results.

#### Discussion

Often, people with a Computer Science/programming background immediately think about algorithms instead of mathematical models. They miss out on a paradigm that can be very powerful and efficient (in terms of time needed to design, implement and maintain an algorithm or model). This model was thought out, implemented and tested in less than 20 minutes (including finding out what this ELO is). I am sure developing some special purpose algorithm will take much more time. In addition, for this model, the solver will find proven optimal solutions, while developing an algorithm yourself will likely result in some heuristic without any concept of optimality.

## Saturday, November 10, 2018

### Quadratic Programming with Binary Variables

Quadratic Programming models where the quadratic terms involve only binary variables are interesting from a modeling point of view: we can apply different reformulations. Let's have a look at the basic model:

\begin{align}\min\>& \color{DarkRed}x^{T} \color{DarkBlue}Q \color{DarkRed}x + \color{DarkBlue} c^{T}\color{DarkRed}x\\ & \color{DarkRed}x_i \in \{0,1\}\end{align}

Only if the matrix $$Q$$ is positive semi-definite do we have a convex problem. So, in general, the above problem is non-convex. To keep things simple, I have no constraints and no additional continuous variables (adding those does not really change the story).

#### Test data

To play a bit with this model, I generated random data:

• $$Q$$ is about 25% dense (i.e. about 75% of the entries $$q_{i,j}$$ are zero). The nonzero entries are drawn from a uniform distribution between -100 and 100.
• The linear coefficients are uniformly distributed $$c_i \sim U(-100,100)$$.
• The size of the model is: $$n=75$$ (i.e. 75 binary variables). This is relatively small, so the hope is we can solve this problem quickly. As we shall see, the results are very mixed.
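As a sketch, data like this could be generated in GAMS along the following lines. This is my own code, not from the original post; the set name and the way the 25% density is achieved (a conditional uniform draw) are assumptions.

set i /v1*v75/;
alias (i,j);

parameters q(i,j) 'quadratic coefficients', c(i) 'linear coefficients';

* roughly 25% of the entries nonzero, uniform on [-100,100]
q(i,j)$(uniform(0,1) < 0.25) = uniform(-100,100);
c(i) = uniform(-100,100);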
#### Local MINLP solvers

Many local MINLP solvers tolerate non-convex problems, but they will not produce a global optimum. So we see:

Solver    Objective     Time    Notes
SBB       -7558.6235    0.5     Local optimum
Knitro    -7714.5721    0.4     Id.
Bonmin    -7626.7975    1.3     Id.

All solvers used default settings and timings are in seconds. It is not surprising that these local solvers find different local optima. For all solvers, the relaxed solution was almost integer and just a few nodes were needed to produce an integer solution. This looks promising. Unfortunately, we need to contain our optimism.

#### Global MINLP Solvers

Global MINLP solvers are in theory well-equipped to solve this model. Unfortunately, they are usually quite slow. For this example, we see a very wide performance range:

Solver      Objective     Time     Notes
Baron       -7760.1771    82
Couenne     -7646.5987    >3600    Time limit, gap 25%
Antigone    -7760.1771    252

Couenne is struggling with this model. Baron and Antigone are doing quite well on this model. We can further observe that the local solvers did not find the global optimal solution.

#### MIQP Solvers

If we just use an MIQP solver, we may get different results, depending on the solver. If the solver expects a convex model, it will refuse to solve the model. Other solvers may use some automatic reformulation. Let's try a few:

Solver    Objective     Time     Notes
Mosek                            Q not positive definite
Cplex     -7760.1771    27       Automatically reformulated to a MIP
Gurobi    -7760.1760    >9999    Time limit, gap 37% (Gurobi 8.0)

MIQP solvers tend to have many options, including those that influence automatic reformulations; here I just used defaults, assuming "the solver knows best what to do". The global MINLP solvers Baron and Antigone did not do badly at all. It is noted that Gurobi 8.1 has better MIQP performance [2] (hopefully it does much better than what we see here). We can force Gurobi to linearize the MIQP model using the solver option preqlinearize 1, and in that case it solves fast.

#### Perturb Diagonal

For borderline non-convex models, it is not unusual to see messages from a quadratic solver that the diagonal of $$Q$$ has been perturbed to make the problem convex. Here we do the same thing in the extreme [1]. Background: a matrix $$Q$$ is positive definite (positive semi-definite) if all eigenvalues $$\lambda_i \gt 0$$ ($$\lambda_i\ge 0$$). If there are negative eigenvalues, we can conclude $$\min x^TQx$$ is a non-convex problem. From this we see that the sign of the smallest eigenvalue $$\lambda_{min}$$ plays an important role. To calculate the smallest eigenvalue we first have to make $$Q$$ symmetric (otherwise we would get complex eigenvalues). This can easily be done by replacing $$Q$$ by $$0.5(Q^T+Q)$$. This operation will not change the values of the quadratic form $$x^TQx$$. If, after calculating the smallest eigenvalue $$\lambda_{min}$$, we observe $$\lambda_{min} \lt 0$$, we can form $\widetilde{Q} = Q - \lambda_{min} I$ Note that we actually add a positive number to the diagonal as $$\lambda_{min}\lt 0$$. To compensate, we need to add to the objective a linear term of the form $\sum_i \lambda_{min} x_i^2 = \sum_i \lambda_{min} x_i$ (for binary variables we have $$x_i^2=x_i$$). With this trick, we made the problem convex. For our data set we have $$\lambda_{min} = -353.710$$. To make sure we are becoming convex, I added a very generous tolerance: $$\lambda_{min}-1$$. So I used: $$\widetilde{Q} = Q - (\lambda_{min}-1) I$$.
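A quick sanity check, using only $$x_i^2 = x_i$$ for binaries, shows why the objective value is unchanged:

$x^T\widetilde{Q}x = x^TQx - (\lambda_{min}-1)\sum_i x_i^2 = x^TQx - (\lambda_{min}-1)\sum_i x_i$

so adding the term $$(\lambda_{min}-1)\sum_i x_i$$ back via the linear coefficients $$c_i + (\lambda_{min}-1)$$ reproduces the original objective on all binary vectors, which is exactly what the convexified model below does.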
Convexified Model \begin{align}\min\>& \color{DarkRed} x^T \left( \color{DarkBlue} Q - (\lambda_{min}-1) I \right) \color{DarkRed} x + \left(\color{DarkBlue} c + (\lambda_{min}-1) \right)^T \color{DarkRed} x \\ & \color{DarkRed}x_i \in \{0,1\}\end{align}

With this reformulation we obtained a convex MIQP. This means for instance that a solver like Mosek is back in play, and that local solvers will produce global optimal solutions. Let's try:

Solver    Objective     Time     Notes
Mosek     -7760.1771    725
Knitro    -7760.1771    2724     Node limit, gap: 3%
Bonmin    -7760.1771    >3600    Time limit, gap: 6%

These results are a little bit slower than I expected, especially when comparing to the performance of the global solvers Baron and Antigone. These results are also much slower than the first experiment with local solvers, where we found integer feasible local solutions very fast.

Note. We could have started by removing all diagonal elements from $$Q$$ and moving them into $$c$$. This is again based on the fact that $$x_i^2 = x_i$$. I did not do this step in this experiment.

#### Linearization

We already saw that some solvers (such as Cplex) apply a linearization automatically. Of course we can do this ourselves. The first thing we can do to help things along is to make $$Q$$ a triangular matrix. We can do this by: $\tilde{q}_{i,j} = \begin{cases} q_{i,j}+q_{j,i} & \text{if $$i \lt j$$} \\ q_{i,j} & \text{if $$i=j$$}\\ 0 & \text{if $$i \gt j$$}\end{cases}$ The next thing to do is to introduce variables $$y_{i,j} = x_i x_j$$. This binary multiplication can be linearized easily: \begin{align} & y_{i,j} \le x_i \\ & y_{i,j} \le x_j \\ & y_{i,j} \ge x_i + x_j -1 \\ & 0 \le y_{i,j} \le 1 \end{align} In the actual model, we can skip a few of these inequalities by observing in which direction the objective pushes the variables $$y_{i,j}$$ (see [1]).

Linearized Model \begin{align} \min\>& \sum_{i,j|i\lt j} \color{DarkBlue}{\tilde{q}}_{i,j} \color{DarkRed} y_{i,j} + \sum_i \left( \color{DarkBlue} {\tilde{q}}_{i,i} + \color{DarkBlue} c_i \right) \color{DarkRed} x_i \\ & \color{DarkRed}y_{i,j} \le \color{DarkRed}x_i && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \lt 0 \\ & \color{DarkRed}y_{i,j} \le \color{DarkRed}x_j && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \lt 0 \\ & \color{DarkRed}y_{i,j} \ge \color{DarkRed}x_i +\color{DarkRed}x_j -1 && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \gt 0 \\ & 0 \le \color{DarkRed}y_{i,j} \le 1 && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \ne 0 \\ & \color{DarkRed}x_i \in \{0,1\} \\ \end{align}

This model does not care whether the original problem is convex or not. Let's see how this works:

Solver    Objective     Time    Notes
Cplex     -7760.1771    41
CBC       -7760.1771    6488

It is known that this MIP is not so easy to solve. A commercial MIP solver may be required to get good solution times. Here we see that Cplex (commercial) is doing much better than CBC (open source).

#### Conclusion

The problem under consideration, an unconstrained MIQP with just $$n=75$$ binary variables, is not that easy to solve. The overall winning strategy is to use a commercial MIP solver on a manually or automatically reformulated MIP model. Solving the MIQP directly is just very difficult for many solvers. The global solver Baron does a surprisingly good job. It is noted that if the data or the problem size changes, these performance figures may shift (a lot).

#### Update

An earlier version of this post had a much slower performance for Cplex MIQP.
When rerunning this, I could not reproduce it, so this must have been a note-taking error on my side (I suspect I was comparing with a result for $$n=100$$). Now, Cplex MIQP and Cplex MIP on the manually reformulated model perform comparably. My faith in Cplex's automatic reformulation is fully restored (and my faith in my note-taking skills further reduced). Apologies for this.

#### References

1. Billionnet, A. and Elloumi, S., Using a mixed integer quadratic programming solver for the unconstrained quadratic 0-1 problem. Math. Program. 109 (2007) pp. 55–68
2. http://yetanothermathprogrammingconsultant.blogspot.com/2018/10/gurobi-81.html

## Wednesday, October 31, 2018

### Strange objective

In [1], a question was posted about how to use the $$\mathit{sign}()$$ function in the SCIP solver. The problem to solve is

argmax(w) sum(sign(Aw) == sign(b))

This is a strange objective. Basically: find $$w$$, with $$v=Aw$$, such that we maximize the number of $$v_i$$ having the same sign as $$b_i$$. I have never seen such an objective. As $$A$$ and $$b$$ are constants, we can precompute $\beta_i = \mathrm{sign}(b_i)$ This simplifies the situation a little bit (but I will not need it below). A different way to say "$$v_i$$ and $$b_i$$ have the same sign" is to state: $v_i b_i > 0$ I assumed here $$b_i \ne 0$$. Similarly, the constraint $$v_i b_i < 0$$ means: "$$v_i$$ and $$b_i$$ have the opposite sign." If we introduce binary variables: $\delta_i = \begin{cases} 1 & \text{if $$v_i b_i > 0$$}\\ 0 & \text{otherwise}\end{cases}$ a model can look like: \begin{align} \max & \sum_i \delta_i \\ &\delta_i =1 \Rightarrow \sum_j a_{i,j} b_i w_j > 0 \\ & \delta_i \in \{0,1\}\end{align} The implication can be implemented using indicator constraints, so we now have a linear MIP model.

Notes:

• I replaced the $$\gt$$ constraint by $$\sum_j a_{i,j} b_i w_j \ge 0.001$$
• If the $$b_i$$ are very small or very large, we can replace them by $$\beta_i$$, i.e. $$\sum_j a_{i,j} \beta_i w_j \gt 0$$
• The case where some $$b_i=0$$ is somewhat ignored here. In this model, we assume $$\delta_i=0$$ for this special case.
• We can add explicit support for $$b_i=0$$ by: \begin{align} \max & \sum_i \delta_i \\ &\delta_i =1 \Rightarrow \sum_j a_{i,j} b_i w_j > 0 && \forall i | b_i\ne 0 \\ & \delta_i =1 \Rightarrow \sum_j a_{i,j} w_j = 0 && \forall i | b_i = 0 \\ & \delta_i \in \{0,1\}\end{align}
• We could model this with binary variables or SOS1 variables. Binary variables require big-M values. It is not always easy to find good values for them. The advantage of indicator constraints is that they allow an intuitive formulation of the problem while not using big-M values.
• Many high-end solvers (Cplex, Gurobi, Xpress, SCIP) support indicator constraints. Modeling systems like AMPL also support them.

#### Test with small data set

Let's do a test with a small random data set.
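The listings below could have been produced by something like the following GAMS sketch. This is my reconstruction; judging from the values in the listings, the data appear to be uniform on $$[-1,1]$$, but that range is an assumption.

set i /i1*i25/;
set j /j1*j5/;

parameters a(i,j) 'random matrix', b(i) 'random rhs', beta(i) 'sign of b';

a(i,j)  = uniform(-1,1);
b(i)    = uniform(-1,1);
beta(i) = sign(b(i));

display a, b, beta;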
---- 17 PARAMETER a random matrix

           j1       j2       j3       j4       j5
i1     -0.657    0.687    0.101   -0.398   -0.416
i2     -0.552   -0.300    0.713   -0.866   4.213380E-4
i3      0.996    0.157    0.982    0.525   -0.739
i4      0.279   -0.681   -0.500    0.338   -0.129
i5     -0.281   -0.297   -0.737   -0.700    0.178
i6      0.662   -0.538    0.331    0.552   -0.393
i7     -0.779    0.005   -0.680    0.745   -0.470
i8     -0.428    0.188    0.445    0.256   -0.072
i9     -0.173   -0.765   -0.372   -0.907   -0.323
i10    -0.636    0.291    0.121    0.540   -0.404
i11     0.322    0.512    0.255   -0.432   -0.827
i12    -0.795    0.283    0.091   -0.937    0.585
i13    -0.854   -0.649    0.051    0.500   -0.644
i14    -0.932    0.170    0.242   -0.221   -0.283
i15    -0.514   -0.507   -0.739    0.867   -0.240
i16     0.567   -0.400   -0.749    0.498   -0.862
i17    -0.596   -0.990   -0.461   -2.97050E-4  -0.697
i18    -0.652   -0.339   -0.366   -0.356    0.928
i19     0.987   -0.260   -0.254    0.544   -0.207
i20     0.826   -0.761    0.471   -0.889    0.153
i21    -0.897   -0.988   -0.198    0.040    0.258
i22    -0.549   -0.208   -0.448   -0.695    0.873
i23    -0.155   -0.731   -0.228   -0.251   -0.463
i24     0.897   -0.622   -0.405   -0.851   -0.197
i25    -0.797   -0.232   -0.352   -0.616   -0.775

---- 17 PARAMETER b random rhs

i1 0.193, i2 0.023, i3 -0.910, i4 0.566, i5 0.891, i6 0.193, i7 0.215, i8 -0.275
i9 0.188, i10 0.360, i11 0.013, i12 -0.681, i13 0.314, i14 0.048, i15 -0.751, i16 0.973
i17 -0.544, i18 0.351, i19 0.554, i20 0.865, i21 -0.598, i22 -0.406, i23 -0.606, i24 -0.507
i25 0.293

---- 17 PARAMETER beta sign of b

i1 1.000, i2 1.000, i3 -1.000, i4 1.000, i5 1.000, i6 1.000, i7 1.000, i8 -1.000
i9 1.000, i10 1.000, i11 1.000, i12 -1.000, i13 1.000, i14 1.000, i15 -1.000, i16 1.000
i17 -1.000, i18 1.000, i19 1.000, i20 1.000, i21 -1.000, i22 -1.000, i23 -1.000, i24 -1.000
i25 1.000

---- 53 VARIABLE w.L

j1 -0.285, j2 -0.713, j3 -0.261, j4 -0.181, j5 -0.630

---- 53 PARAMETER v sum(j, a(i,j)*w(j))

i1 0.005, i2 0.342, i3 -0.282, i4 0.556, i5 0.498, i6 0.256, i7 0.557, i8 -0.129
i9 1.059, i10 0.099, i11 0.076, i12 -0.197, i13 1.008, i14 0.299, i15 0.695, i16 0.771
i17 1.435, i18 0.003, i19 0.002, i20 0.249, i21 0.843, i22 -0.002, i23 0.962, i24 0.571
i25 1.084

---- 53 VARIABLE delta.L

i1 1.000, i2 1.000, i3 1.000, i4 1.000, i5 1.000, i6 1.000, i7 1.000, i8 1.000
i9 1.000, i10 1.000, i11 1.000, i12 1.000, i13 1.000, i14 1.000, i16 1.000, i18 1.000
i19 1.000, i20 1.000, i22 1.000, i25 1.000

---- 53 VARIABLE z.L = 20.000 objective variable

This means that for this 25-row problem we can find $$w$$'s such that 20 rows yield the same sign as $$b_i$$.

#### References

1. SCIP What is the function for sign?, https://stackoverflow.com/questions/53030430/scip-what-is-the-function-for-sign

## Thursday, October 25, 2018

### Gurobi 8.1

Quite some improvements in MIQP performance. Of course the smaller improvements in other model types also help: over time these things add up to substantial performance gains.

#### Announcement

Gurobi is pleased to announce the release of Gurobi Version 8.1. This latest version improves the overall performance of Gurobi Optimizer, and adds enhancements to Gurobi Instant Cloud, including support for Microsoft Azure® and for the latest Amazon Web Services® machines, and more. Version 8.1 demonstrates our commitment to delivering the new features our users request, and includes:

Performance Improvements

Gurobi Optimizer v8.1 continues to push the envelope of solver speed and performance. The overall v8.1 performance improvements versus v8.0 include:

MIQP
• More than a factor of 2.8x faster overall.
• More than a factor of 6x faster on difficult models that take more than 100 seconds to solve.

MIQCP
• 38% faster overall.
• 92% faster on difficult models that take more than 100 seconds to solve.
LP
• 2.9% faster overall in default settings.
• 6.5% faster on difficult models that take more than 100 seconds to solve.

LP barrier
• 4.4% faster overall.
• 11% faster on difficult models that take more than 100 seconds to solve.

LP dual simplex
• 4.2% faster overall.
• 10.5% faster on difficult models that take more than 100 seconds to solve.

Enhancements
• Gurobi Instant Cloud now supports Microsoft Azure®: Instant Cloud users can now use Microsoft Azure, in several regions.
• Gurobi Instant Cloud adds faster and more powerful machines on Amazon EC2®: The new version supports c5, r5 and z1 instance types.
• New Q matrix linearization for MIQP and MIQCP models: We added a new Q matrix linearization approach in presolve. This new option can be chosen by setting parameter PreQLinearize to the new value of 2.
• Improved Mac Installation Package: Users no longer need to install Xcode to perform the installation.
• Support for Python 3.7: We have added support for Python 3.7 on Windows, Linux and Mac platforms.
• A callback function for multi-objective optimization: We now let users terminate optimization for individual objectives for multi-objective MIP models.

To learn more about all of the new features and improvements, visit What's New in Gurobi 8.1.

## Wednesday, October 17, 2018

### The Muffin Problem

We are asked to split $$m=5$$ muffins between $$s=3$$ students, such that each student gets in total $\frac{m}{s} = \frac{5}{3}$ worth of muffins [1]. From [2] we see two possible ways of doing this:

Allocation 1 (from [2])
Allocation 2 (from [2])

(I believe Proc means Protocol). The problem is to find a way to divide the muffins such that the smallest piece is maximized. There is a nice and simple MIP formulation for this. Let's define $$x_{i,j} \in [0,1]$$ as the fraction of muffin $$i$$ assigned to student $$j$$. Also we need: $\delta_{i,j} = \begin{cases} 1 & \text{if $$x_{i,j} \gt 0$$} \\ 0 & \text{if $$x_{i,j}=0$$}\end{cases}$ Then we can write:

Muffin Problem \begin{align}\max\> & \color{DarkRed} z \\ & \sum_i \color{DarkRed} x_{i,j} = \color{DarkBlue} {\frac{m}{s}} && \forall j\\ & \sum_j \color{DarkRed} x_{i,j} = 1 && \forall i\\ & \color{DarkRed} \delta_{i,j} = 0 \Rightarrow \color{DarkRed} x_{i,j} = 0 \\ & \color{DarkRed} \delta_{i,j} = 1 \Rightarrow \color{DarkRed} z \le \color{DarkRed} x_{i,j} \\ & 0 \le \color{DarkRed} x_{i,j} \le 1\\ & \color{DarkRed} \delta_{i,j} \in \{0,1\}\end{align}

The objective takes care of $$x_{i,j}=0 \Rightarrow \delta_{i,j}=0$$: if $$x_{i,j}=0$$ while $$\delta_{i,j}=1$$, the second implication would force $$z \le 0$$, which the maximization avoids. Some MIP solvers allow us to use the implications directly (as so-called indicator constraints). For others we need to reformulate. It is not difficult to rewrite them as inequalities:

| Implication | Inequality |
|---|---|
| $$\delta_{i,j} = 0 \Rightarrow x_{i,j} = 0$$ | $$x_{i,j} \le \delta_{i,j}$$ |
| $$\delta_{i,j} = 1 \Rightarrow z \le x_{i,j}$$ | $$z \le x_{i,j}+ (1-\delta_{i,j})$$ |

This is similar to what is proposed in [1] (they use $$y_{i,j} = 1-\delta_{i,j}$$) and somewhat simpler than the approach used in [3]. (A GAMS sketch of this linearized formulation appears at the end of this post.)
The results are:

---- 26 VARIABLE x.L fraction of muffin assigned to student

          student1   student2   student3
muffin1   0.500                 0.500
muffin2   0.583      0.417
muffin3              0.417      0.583
muffin4   0.583      0.417
muffin5              0.417      0.583

---- 26 VARIABLE d.L indicator for nonzero x

          student1   student2   student3
muffin1   1.000                 1.000
muffin2   1.000      1.000
muffin3              1.000      1.000
muffin4   1.000      1.000
muffin5              1.000      1.000

---- 26 VARIABLE z.L = 0.417 smallest nonzero x

If student1 arrives early and confiscates muffin1, we can fix $$x_{\text{muffin1},\text{student1}}=1$$. With this we can reproduce the first solution:

---- 28 VARIABLE x.L fraction of muffin assigned to student

student1 student2 student3
muffin1 1.000
muffin2 1.000
muffin3 0.333 0.667
muffin4 1.000
muffin5 0.333 0.667

---- 28 VARIABLE z.L = 0.333 smallest nonzero x

A solution for 7 muffins and 5 students can look like:

---- 26 VARIABLE x.L fraction of muffin assigned to student

student1 student2 student3 student4 student5
muffin1 0.667 0.333
muffin2 0.333 0.667
muffin3 0.400 0.600
muffin4 0.333 0.333 0.333
muffin5 0.467 0.533
muffin6 0.467 0.533
muffin7 0.400 0.600

---- 26 VARIABLE z.L = 0.333 smallest nonzero x

Not all problems are super easy to solve to proven optimality. E.g. with 11 muffins and 9 students, I had to spend several minutes. As usual, the solver quickly found the optimal solution, but proving optimality was not so quick. There is a lot of symmetry in the model. That may be something to exploit.
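For reference, here is a minimal GAMS sketch of the linearized muffin formulation above. This is my own rendering (identifiers and the equation names are mine, not code from the original post):

set i 'muffins'  /muffin1*muffin5/;
set j 'students' /student1*student3/;
scalars m /5/, s /3/;

positive variable x(i,j) 'fraction of muffin i given to student j';
binary variable   d(i,j) 'nonzero indicator for x(i,j)';
variable          z      'smallest nonzero piece';

x.up(i,j) = 1;

equations student(j), muffin(i), zero(i,j), smallest(i,j);

student(j)..    sum(i, x(i,j)) =e= m/s;
muffin(i)..     sum(j, x(i,j)) =e= 1;
* d(i,j)=0 => x(i,j)=0
zero(i,j)..     x(i,j) =l= d(i,j);
* d(i,j)=1 => z <= x(i,j)
smallest(i,j).. z =l= x(i,j) + (1 - d(i,j));

model muffins /all/;
solve muffins using mip maximizing z;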
#### 3.4 Conditionals and Booleans

##### 3.4.1 Motivating Example: Shipping Costs

In Functions Practice: Cost of pens, we wrote a program (pen-cost) to compute the cost of ordering pens. Continuing the example, we now want to account for shipping costs. We'll determine shipping charges based on the cost of the order. Specifically, we will write a function add-shipping to compute the total cost of an order including shipping. Assume an order valued at $10 or less ships for $4, while an order valued above $10 ships for $8. As usual, we will start by writing examples of the add-shipping computation.

Do Now!
Use the is notation from where blocks to write several examples of add-shipping. How are you choosing which inputs to use in your examples? Are you picking random inputs? Being strategic in some way? If so, what's your strategy?

Here is a proposed collection of examples for add-shipping.

add-shipping(10) is 10 + 4
add-shipping(3.95) is 3.95 + 4
add-shipping(20) is 20 + 8
add-shipping(10.01) is 10.01 + 8

Do Now!
What do you notice about our examples? What strategies do you observe across our choices?

Our proposed examples feature several strategic decisions:

• Including 10, which is at the boundary of charges based on the text
• Including 10.01, which is just over the boundary
• Including both natural and real (decimal) numbers
• Including examples that should result in each shipping charge mentioned in the problem (4 and 8)

So far, we have used a simple rule for creating a function body from examples: locate the parts that are changing, replace them with names, then make the names the parameters to the function.

Do Now!
What is changing across our add-shipping examples? Do you notice anything different about these changes compared to the examples for our previous functions?

Two things are new in this set of examples:

• The values of 4 and 8 differ across the examples, but they each occur in multiple examples.
• The values of 4 and 8 appear only in the computed answers—not as an input. Which one we use seems to depend on the input value.

These two observations suggest that something new is going on with add-shipping. In particular, we have clusters of examples that share a fixed value (the shipping charge), but different clusters (a) use different values and (b) have a pattern to their inputs (whether the input value is less than or equal to 10). This calls for being able to ask questions about inputs within our programs.

##### 3.4.2 Conditionals: Computations with Decisions

To ask a question about our inputs, we use a new kind of expression called an if expression. Here's the full definition of add-shipping:

fun add-shipping(order-amt :: Number) -> Number:
  doc: "add shipping costs to order total"
  if order-amt <= 10:
    order-amt + 4
  else:
    order-amt + 8
  end
where:
  add-shipping(10) is 10 + 4
  add-shipping(3.95) is 3.95 + 4
  add-shipping(20) is 20 + 8
  add-shipping(10.01) is 10.01 + 8
end

In an if expression, we ask a question that can produce an answer that is true or false (here order-amt <= 10, which we'll explain below in Booleans), provide one expression for when the answer to the question is true (order-amt + 4), and another for when the result is false (order-amt + 8). The else in the program marks the answer in the false case; we call this the else clause. We also need end to tell Pyret we're done with the question and answers.

##### 3.4.3 Booleans

Every expression in Pyret evaluates to a value. So far, we have seen three types of values: Number, String, and Image.
What type of value does a question like order-amt <= 10 produce? We can use the interactions prompt to experiment and find out.

Do Now!
Enter each of the following expressions at the interactions prompt. What type of value did you get? Do the values fit the types we have seen so far?

3.95 <= 10
20 <= 10

The values true and false belong to a new type in Pyret, called Boolean. (The name honors George Boole.) While there are infinitely many values of type Number, there are only two of type Boolean: true and false.

Exercise
What would happen if we entered order-amt <= 10 at the interactions prompt to explore booleans? Why does that happen?

##### 3.4.3.1 Other Boolean Operations

There are many other built-in operations that return Boolean values. Comparing values for equality is a common one (there is much more we can and should say about equality, which we will do later [Re-Examining Equality]):

1 == 1
true

1 == 2
false

"cat" == "dog"
false

"cat" == "CAT"
false

In general, == checks whether two values are equal. Note this is different from the single = used to associate names with values in the directory. The last example is the most interesting: it illustrates that strings are case-sensitive, meaning individual letters must match in their case for strings to be considered equal. (This will become relevant when we get to tables later.)

Sometimes, we also want to compare strings to determine their alphabetical order. Here are several examples:

"a" < "b"
true

"a" >= "c"
false

"that" < "this"
true

"alpha" < "beta"
true

which is the alphabetical order we're used to; but others need some explaining:

"a" >= "C"
true

"a" >= "A"
true

These use a convention laid down a long time ago in a system called ASCII. (Things get far more complicated with non-ASCII letters: e.g., Pyret thinks "Ł" is > than "Z", but in Polish, this should be false. Worse, the ordering depends on location, e.g., Denmark/Norway vs. Finland/Sweden.)

Do Now!
Can you compare true and false? Try comparing them for equality (==), then for inequality (such as <).

In general, you can compare any two values for equality (well, almost, we'll come back to this later); for instance:

"a" == 1
false

If you want to compare values of a specific kind, you can use more specific operators:

num-equal(1, 1)
true

num-equal(1, 2)
false

string-equal("a", "a")
true

string-equal("a", "b")
false

Why use these operators instead of the more generic ==?

Do Now!
Try

num-equal("a", 1)
string-equal("a", 1)

Both of these signal errors. Therefore, it's wise to use the type-specific operators where you're expecting the two arguments to be of the same type. Then, Pyret will signal an error if you go wrong, instead of blindly returning an answer (false) which lets your program continue to compute a nonsensical value.

There are even more Boolean-producing operators, such as:

wm = "will.i.am"
string-contains(wm, "will")
true

string-contains(wm, "Will")
false

(Note the capital W in the second example.) In fact, just about every kind of data will have some Boolean-valued operators to enable comparisons.

##### 3.4.3.2 Combining Booleans

Often, we want to base decisions on more than one Boolean value. For instance, you are allowed to vote if you're a citizen of a country and you are above a certain age. You're allowed to board a bus if you have a ticket or the bus is having a free-ride day. We can even combine conditions: you're allowed to drive if you are above a certain age and have good eyesight and either pass a test or have a temporary license. Also, you're allowed to drive if you are not inebriated.
Corresponding to these forms of combinations, Pyret offers three main operations: and, or, and not. Here are some examples of their use:

(1 < 2) and (2 < 3)
true

(1 < 2) and (3 < 2)
false

(1 < 2) or (2 < 3)
true

(3 < 2) or (1 < 2)
true

not(1 < 2)
false

Exercise
Explain why numbers and strings are not good ways to express the answer to a true/false question.

Shipping costs are rising, so we want to modify the add-shipping program to include a third shipping level: orders between $10 and $30 ship for $8, but orders over $30 ship for $12. This calls for two modifications to our program:

• We have to be able to ask another question to distinguish situations in which the shipping charge is 8 from those in which the shipping charge is 12.
• The question for when the shipping charge is 8 will need to check whether the input is between two values.

We'll handle these in order. The current body of add-shipping asks one question: order-amt <= 10. We need to add another one for order-amt <= 30, using a charge of 12 if that question fails. Where do we put that additional question? An expanded version of the if-expression, using else if, allows you to ask multiple questions:

fun add-shipping(order-amt :: Number) -> Number:
  doc: "add shipping costs to order total"
  if order-amt <= 10:
    order-amt + 4
  else if order-amt <= 30:
    order-amt + 8
  else:
    order-amt + 12
  end
where:
  ...
end

At this point, you should also add where examples that use the 12 charge. How does Pyret determine which answer to return? It evaluates each question expression in order, starting from the one that follows if. It continues through the questions, returning the value of the answer of the first question that returns true. Here's a summary of the if-expression syntax and how it evaluates.

if QUESTION1:
  <result in case first question true>
else if QUESTION2:
  <result in case QUESTION1 false and QUESTION2 true>
else:
  <result in case both QUESTIONs false>
end

A program can have multiple else if cases, thus accommodating an arbitrary number of questions within a program.

Do Now!
The problem description for add-shipping said that orders between 10 and 30 should incur an 8 charge. How does the above code capture "between"?

This is currently entirely implicit. It depends on us understanding the way an if evaluates. The first question is order-amt <= 10, so if we continue to the second question, it means order-amt > 10. In this context, the second question asks whether order-amt <= 30. That's how we're capturing "between"-ness.

Do Now!
How might you modify the above code to build the "between 10 and 30" requirement explicitly into the question for the 8 case?

Remember the and operator on booleans? We can use that to capture "between" relationships, as follows:

(order-amt > 10) and (order-amt <= 30)

Do Now!
Why are there parentheses around the two comparisons? If you replace order-amt with a concrete value (such as 20) and leave off the parentheses, what happens when you evaluate this expression in the interactions pane?

Here is what add-shipping looks like with the and included:

fun add-shipping(order-amt :: Number) -> Number:
  doc: "add shipping costs to order total"
  if order-amt <= 10:
    order-amt + 4
  else if (order-amt > 10) and (order-amt <= 30):
    order-amt + 8
  else:
    order-amt + 12
  end
where:
  add-shipping(10) is 10 + 4
  add-shipping(3.95) is 3.95 + 4
  add-shipping(20) is 20 + 8
  add-shipping(10.01) is 10.01 + 8
  add-shipping(30) is 30 + 8
  add-shipping(32) is 32 + 12
end

Both versions of add-shipping support the same examples. Are both correct? Yes.
And while the first part of the second question (order-amt > 10) is redundant, it can be helpful to include such conditions for three reasons:

1. They signal to future readers (including ourselves!) the condition covering a case.
2. They ensure that if we make a mistake in writing an earlier question, we won't silently get surprising output.
3. They guard against future modifications, where someone might modify an earlier question without realizing the impact it's having on a later one.

Exercise
An online-advertising firm needs to determine whether to show an ad for a skateboarding park to website users. Write a function show-ad that takes the age and haircolor of an individual user and returns true if the user is between the ages of 9 and 18 and has either pink or purple hair. Try writing this two ways: once with if expressions and once using just boolean operations.

Responsible Computing: Harms from Reducing People to Simple Data

Assumptions about users get encoded in even the simplest functions. The advertising exercise shows an example in which a decision gets made on the basis of two pieces of information about a person: age and haircolor. While some people might stereotypically associate skateboarders with being young and having colored hair, many skateboarders do not fit these criteria and many people who fit these criteria don't skateboard. While real programs to match ads to users are more sophisticated than this simple function, even the most sophisticated advertising programs boil down to tracking features or information about individuals and comparing it to information about the content of ads. A real ad system would differ in tracking dozens (or more) of features and using more advanced programming ideas than simple conditionals to determine the suitability of an ad (we'll discuss some of these later in the book).

This example also extends to situations far more serious than ads: who gets hired, granted a bank loan, or sent to or released from jail are other examples of real systems that depend on comparing data about individuals with criteria maintained by a program. From a social responsibility perspective, the questions here are what data about individuals should be used to represent them for processing by programs and what stereotypes might those data encode. In some cases, individuals can be represented by data without harm (a university housing office, for example, stores student ID numbers and which room a student is living in). But in other cases, data about individuals get interpreted in order to predict something about them. Decisions based on those predictions can be inaccurate and hence harmful.

##### 3.4.5 Evaluating by Reducing Expressions

In How Functions Evaluate, we talked about how Pyret reduces expressions and function calls to values. Let's revisit this process, this time expanding to consider if-expressions. Suppose we want to compute the wages of a worker. The worker is paid $10 for every hour up to the first 40 hours, and is paid $15 for every extra hour. Let's say hours contains the number of hours they work, and suppose it's 45:

hours = 45

Suppose the formula for computing the wage is

if hours <= 40:
  hours * 10
else if hours > 40:
  (40 * 10) + ((hours - 40) * 15)
end

Let's now see how this results in an answer, using a step-by-step process that should match what you've seen in algebra classes (each step is described just before the reduction it produces):

The first step is to substitute the hours with 45.
if 45 <= 40:
  45 * 10
else if 45 > 40:
  (40 * 10) + ((45 - 40) * 15)
end

Next, the conditional part of the if expression is evaluated, which in this case is false.

=> if false:
     45 * 10
   else if 45 > 40:
     (40 * 10) + ((45 - 40) * 15)
   end

Since the condition is false, the next branch is tried.

=> if 45 > 40:
     (40 * 10) + ((45 - 40) * 15)
   end

Pyret evaluates the question in the conditional, which in this case produces true.

=> if true:
     (40 * 10) + ((45 - 40) * 15)
   end

Since the condition is true, the expression reduces to the body of that branch. After that, it's just arithmetic.

=> (40 * 10) + ((45 - 40) * 15)
=> 400 + (5 * 15)
=> 475

This style of reduction is the best way to think about the evaluation of Pyret expressions. The whole expression takes steps that simplify it, proceeding by simple rules. You can use this style yourself if you want to try and work through the evaluation of a Pyret program by hand (or in your head).

##### 3.4.6 Composing Functions

We started this chapter wanting to account for shipping costs on an order of pens. So far, we have written two functions:

• pen-cost for computing the cost of the pens
• add-shipping for adding shipping costs to a total amount

What if we now wanted to compute the price of an order of pens including shipping? We would have to use both of these functions together, sending the output of pen-cost to the input of add-shipping.

Do Now!
Write an expression that computes the total cost, with shipping, of an order of 10 pens that say "bravo".

There are two ways to structure this computation. We could pass the result of pen-cost directly to add-shipping:

add-shipping(pen-cost(10, "bravo"))

Alternatively, you might have named the result of pen-cost as an intermediate step:

pens = pen-cost(10, "bravo")
add-shipping(pens)

Both methods would produce the same answer.

##### 3.4.6.1 How Function Compositions Evaluate

Let's review how these programs evaluate in the context of substitution and the directory. We'll start with the second version, in which we explicitly name the result of calling pen-cost.

Evaluating the second version: At a high level, Pyret goes through the following steps:

• Substitute 10 for num-pens and "bravo" for message in the body of pen-cost, then evaluate the substituted body
• Store pens in the directory, with a value of 3.5
• As a first step in evaluating add-shipping(pens), look up the value of pens in the directory
• Substitute 3.5 for order-amt in the body of add-shipping, then evaluate the resulting expression, which results in 7.5

Evaluating the first version: As a reminder, the first version consisted of a single expression:

add-shipping(pen-cost(10, "bravo"))

• Since arguments are evaluated before functions get called, start by evaluating pen-cost(10, "bravo") (again using substitution), which reduces to 3.5
• Substitute 3.5 for order-amt in the body of add-shipping, then evaluate the resulting expression, which results in 7.5

Do Now!
Contrast these two summaries. Where do they differ? What about the code led to those differences?

The difference lies in the use of the directory: the version that explicitly named pens uses the directory. The other version doesn't use the directory at all. Yet both approaches lead to the same result, since the same value (the result of calling pen-cost) gets substituted into the body of add-shipping. This analysis might suggest that the version that uses the directory is somehow wasteful: it seems to take more steps just to end up at the same result.
Yet one might argue that the version that uses the directory is easier to read (different readers will have different opinions on this, and that's fine). So which should we use? Use whichever makes more sense to you on a given problem. There will be times when we prefer each of these styles. Furthermore, it will turn out (once we've learned more about nuances of how programs evaluate) that the two versions aren't as different as they appear right now.

##### 3.4.6.2 Function Composition and the Directory

Let's try one more variation on this problem. Perhaps seeing us name the intermediate result of pen-cost made you wish that we had used intermediate names to make the body of pen-cost more readable. For example, we could have written it as:

fun pen-cost(num-pens :: Number, message :: String) -> Number:
  doc: "total cost for pens, each 25 cents plus 2 cents per message character"
  message-cost = (string-length(message) * 0.02)
  num-pens * (0.25 + message-cost)
where:
  ...
end

Do Now! Write out the high-level steps for how Pyret will evaluate the following program using this new version of pen-cost:

pens = pen-cost(10, "bravo")
add-shipping(pens)

Hopefully, you made two entries into the directory, one for message-cost inside the body of pen-cost and one for pens as we did earlier.

Do Now! Consider the following program. What result do you think Pyret should produce?

pens = pen-cost(10, "bravo")
cheap-message = (message-cost > 0.5)
add-shipping(pens)

Using the directory you envisioned for the previous activity, what answer do you think you will get?

Something odd is happening here. The new program tries to use message-cost to define cheap-message. But the name message-cost doesn't appear anywhere in the program, unless we peek inside the function bodies. But letting code peek inside function bodies doesn't make sense: you might not be able to see inside the functions (if they are defined in libraries, for example), so this program should report an error that message-cost is undefined. Okay, so that's what should happen. But our discussion of the directory suggests that both pens and message-cost will be in the directory, meaning Pyret would be able to use message-cost. What's going on?

This example prompts us to explain one more nuance about the directory. Precisely to avoid problems like the one illustrated here (which should produce an error), directory entries made within a function are local (private) to the function body. When you call a function, Pyret sets up a local directory that other functions can't see. A function body can add or refer to names in either its local, private directory (as with message-cost) or the overall (global) directory (as with pens). But in no case can one function call peek inside the local directory of another function call. Once a function call completes, its local directory disappears (because nothing else would be able to use it anyway).

##### 3.4.7 Nested Conditionals

We showed that the results in if-expressions are themselves expressions (such as order-amt + 4 in the following function):

fun add-shipping(order-amt :: Number) -> Number:
  doc: "add shipping costs to order total"
  if order-amt <= 10:
    order-amt + 4
  else:
    order-amt + 8
  end
end

The result expressions can be more complicated. In fact, they could be entire if-expressions! To see an example of this, let's develop another function. This time, we want a function that will compute the cost of movie tickets. Let's start with a simple version in which tickets are $10 apiece.
fun buy-tickets1(count :: Number) -> Number:
  doc: "Compute the price of tickets at $10 each"
  count * 10
where:
  buy-tickets1(0) is 0
  buy-tickets1(2) is 2 * 10
  buy-tickets1(6) is 6 * 10
end

Now, let's augment the function with an extra parameter to indicate whether the purchaser is a senior citizen who is entitled to a discount. In such cases, we will reduce the overall price by 15%.

fun buy-tickets2(count :: Number, is-senior :: Boolean) -> Number:
  doc: "Compute the price of tickets at $10 each with senior discount of 15%"
  if is-senior == true:
    count * 10 * 0.85
  else:
    count * 10
  end
where:
  buy-tickets2(0, false) is 0
  buy-tickets2(0, true) is 0
  buy-tickets2(2, false) is 2 * 10
  buy-tickets2(2, true) is 2 * 10 * 0.85
  buy-tickets2(6, false) is 6 * 10
  buy-tickets2(6, true) is 6 * 10 * 0.85
end

There are a couple of things to notice here:

• The function now has an additional parameter of type Boolean to indicate whether the purchaser is a senior citizen.
• We have added an if expression to check whether to apply the discount.
• We have more examples, because we have to vary both the number of tickets and whether a discount applies.

Now, let's extend the program once more, this time also offering the discount if the purchaser is not a senior but has bought more than 5 tickets. Where should we modify the code to do this? One option is to first check whether the senior discount applies. If not, we check whether the number of tickets qualifies for a discount:

fun buy-tickets3(count :: Number, is-senior :: Boolean) -> Number:
  doc: "Compute the price of tickets at $10 each with discount of 15% for more than 5 tickets or being a senior"
  if is-senior == true:
    count * 10 * 0.85
  else:
    if count > 5:
      count * 10 * 0.85
    else:
      count * 10
    end
  end
where:
  buy-tickets3(0, false) is 0
  buy-tickets3(0, true) is 0
  buy-tickets3(2, false) is 2 * 10
  buy-tickets3(2, true) is 2 * 10 * 0.85
  buy-tickets3(6, false) is 6 * 10 * 0.85
  buy-tickets3(6, true) is 6 * 10 * 0.85
end

Notice here that we have put a second if expression within the else case. This is valid code. (We could have also used an else if here, but we didn't, so that we could show that nested conditionals are also valid.)

Exercise

Show the steps through which this function would evaluate in a situation where no discount applies, such as buy-tickets3(2, false).

Do Now! Look at the current code: do you see a repeated computation that we might end up having to modify later?

Part of good code style is making sure that our programs will be easy to maintain later. If the theater changes its discount policy, for example, the current code would require us to change the discount (0.85) in two places. It would be much better to have that computation written only one time. We can achieve that by asking which conditions lead to the discount applying, and writing them as the check within just one if expression.

Do Now! Under what conditions should the discount apply?

Here, we see that the discount applies if either the purchaser is a senior or more than 5 tickets have been bought.
We can therefore simplify the code by using or as follows (we've left out the examples because they haven't changed from the previous version):

fun buy-tickets4(count :: Number, is-senior :: Boolean) -> Number:
  doc: "Compute the price of tickets at $10 each with discount of 15% for more than 5 tickets or being a senior"
  if (is-senior == true) or (count > 5):
    count * 10 * 0.85
  else:
    count * 10
  end
end

This code is much tighter, and all of the cases where the discount applies are described together in one place. There are still two small changes we want to make to really clean this up, though.

Do Now! Take a look at the expression is-senior == true. What will this evaluate to when the value of is-senior is true? What will it evaluate to when the value of is-senior is false?

Notice that the == true part is redundant. Since is-senior is already a boolean, we can check its value without using the == operator. Here's the revised code:

fun buy-tickets5(count :: Number, is-senior :: Boolean) -> Number:
  doc: "Compute the price of tickets at $10 each with discount of 15% for more than 5 tickets or being a senior"
  if is-senior or (count > 5):
    count * 10 * 0.85
  else:
    count * 10
  end
end

Notice the revised question in the if expression. As a general rule, your code should never include == true. You can always take that out and just use the expression you were comparing to true.

Do Now! What do you write to eliminate == false? For example, what might you write instead of is-senior == false?

Finally, notice that we still have one repeated computation: the base cost of the tickets (count * 10). If the ticket price changes, it would be better to have only one place to update that price. We can clean that up by first computing the base price, then applying the discount when appropriate:

fun buy-tickets6(count :: Number, is-senior :: Boolean) -> Number:
  doc: "Compute the price of tickets at $10 each with discount of 15% for more than 5 tickets or being a senior"
  base = count * 10
  if is-senior or (count > 5):
    base * 0.85
  else:
    base
  end
end

##### 3.4.8 Recap: Booleans and Conditionals

With this chapter, our computations can produce different results in different situations. We ask questions using if-expressions, in which each question or check uses an operator that produces a boolean.

• There are two Boolean values: true and false.
• A simple kind of check (that produces a boolean) compares values for equality (==) or inequality (<>). Other operations that you know from math, like < and >=, also produce booleans.
• We can build larger expressions that produce booleans from smaller ones using the operators and, or, not.
• We can use if expressions to ask true/false questions within a computation, producing different results in each case.
• We can nest conditionals inside one another if needed.
• You never need to use == to compare a value to true or false: you can just write the value or expression on its own (perhaps with not to get the same computation).
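As a quick illustration of that last recap point, here is a small sketch of our own (the function name is-full-price is made up, not from the chapter): instead of writing is-senior == false, apply not to the expression you would otherwise have compared to false.

fun is-full-price(count :: Number, is-senior :: Boolean) -> Boolean:
  doc: "true when no discount applies; not(...) replaces an == false check"
  not(is-senior or (count > 5))
where:
  is-full-price(2, false) is true
  is-full-price(2, true) is false
  is-full-price(6, false) is false
end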
Publications and Preprints

Sums of the digits in bases $2$ and $3$ by Jean-Marc Deshouillers, Laurent Habsieger, Shanta Laishram and Bernard Landreau

Let $b \ge 2$ be an integer and let $s_b(n)$ denote the sum of the digits of the representation of an integer $n$ in base $b$. For sufficiently large $N$, one has $$\#\left\{n \le N : \left|s_3(n) - s_2(n)\right| \le 0.1457205 \log n \right\} \, > \, N^{0.970359}.$$ The proof only uses the separate (or marginal) distributions of the values of $s_2(n)$ and $s_3(n)$. isid/ms/2016/13 [fulltext]
HAL: in2p3-00164743, version 1

XVth International Conference on Electromagnetic Isotope Separators and Techniques related to their Applications (EMIS2007), Deauville, France (2007)

A selective and compact 1+ $\rightarrow$ n+ ECRIS (24/06/2007)

The ISAC Radioactive Ion Beams (RIB) facility can deliver RIB to a linear accelerator composed of a 4-rod RFQ followed by a linear accelerator which can provide beams of A/q ≤ 30 amu in an energy range from 0.15 to 1.5 A*MeV. An accelerator extension allows us to extend the mass range to A = 150 amu and the energy up to 6.5 A*MeV. The minimal charge state requirement for A = 150 is q = 5, which can be accomplished with an on-line ECRIS like MISTIC, which is under test at ISAC-TRIUMF. By coupling a selective 1+ source, such as resonant laser ionization, via an RFQ mass analyzer to an ECRIS, one can obtain a RIB with both high purity and high charge state.

Subject(s): Physics/Physics/Accelerator Physics

in2p3-00164743, version 1 http://hal.in2p3.fr/in2p3-00164743 oai:hal.in2p3.fr:in2p3-00164743

Contributor: Michel Lion <> Submitted on: Monday, 23 July 2007, 16:02:24. Last modified: Monday, 23 July 2007, 17:59:29
# How to add a superscript in text mode

I'm writing about C* algebras and I'm trying to write *-strings efficiently. I managed to define C* like this:

\newcommand{\Cstar}{C\textsuperscript{*}}

While I have to invoke it as \Cstar{} to prevent it from sticking to the next word, I have had trouble writing a command to add * to any word (such as morphism or isometry). I tried doing this:

\newcommand{\star}[1]{#1\textsuperscript{*}}

without luck. Is this possible, and should I be doing this with LaTeX, or is it something I should be doing with my editor's macros?

• Welcome to TeX.SX! Note that it is usually best practice here to include a small example document that does only include the bare necessities to show what you're trying so far (e.g. \documentclass{article}\newcommand\star[1]{#1\textsuperscript{*}}\begin{document}\star{Foo}\end{document} would suffice here). – Skillmon Apr 8 at 19:59
• The command \star is already defined in LaTeX. You can try \Star instead. – Phelype Oleinik Apr 8 at 20:00

## 1 Answer

In my opinion, the asterisk should always be in an upright font, independently of the context. Besides, \textsuperscript{*} would place the asterisk too high; see the last line in the image below. Also C* should probably always appear upright, but you may decide otherwise. Redefining \star could be safe in your context, but be aware that \star is the name of a symbol, namely ⋆, and you may want to save it under another name in case you decide to use it.

\documentclass{article}
\usepackage{amsmath}
\newcommand{\Star}[1]{#1\ensuremath{^*}\kern-\scriptspace}
\newcommand{\CStar}{\Star{\ensuremath{\mathrm{C}}}}
\begin{document}

% the commands in upright text
We deal with \CStar-algebras, with \Star{morphisms} and \Star{isometries}.

% the commands in italics context, such as theorems
\textit{We deal with \CStar-algebras, with \Star{morphisms} and \Star{isometries}.}

% with \textsuperscript{*}
\textit{We deal with C\textsuperscript{*}-algebras, with morphisms\textsuperscript{*} and isometries\textsuperscript{*}.}

\end{document}

• Thank you, the upright text is just what I was looking for. – user20402 Apr 8 at 21:09
17 Population Models

Subdivision occurs when individuals are spatially, ecologically, or temporally separated, causing the relative probability of mating among individuals to be non-uniformly spread across all individuals. This is the natural state for most species—they are not completely randomly mating—and as such this causes changes in how genetic variation is partitioned. In this chapter we examine subdivision and how we quantify its strength. Before we descend into discussions of models describing population subdivision, we should probably clarify some jargon that is commonly used in describing these systems.

Migration: Migration is the movement of individuals among spatial locations. It does not, strictly speaking, denote any relation to mating. Birds, mammals, and even insects migrate in response to annual seasonality, though the process of movement towards the equator or towards more reasonable habitat for the particular season does not necessarily influence population genetic structure.

Gene Flow: Gene flow is a process of genetic material moving between locales or populations that results in modification of standing genetic variation. Gene flow is commonly denoted as a rate (e.g., a fraction bounded between 0 and 1) and has a direct influence on the allelic and/or genotypic composition of the population. That is not to say that migration cannot result in subsequent mating and thereby contribute to population structure; it just does not necessarily require it.

17.1 Models of Partition

If individuals are partitioned into different groups, the expectations for allele and genotype frequencies depend upon the way in which populations are connected through gene flow. There are some classic models associated with population connectivity that we will cover in this section, though in reality the ways in which populations are actually connected most likely fall outside the strict conventions of these simple models. What these models do, however, is provide us with a framework on which to examine observed distributions of genetic variation and to make predictions about future structure given a few assumptions. A network analogy will be used here to describe the connectivity, with populations acting as nodes in the network and gene flow being indicated by edges connecting these nodes. Strictly speaking, this is a weighted graph, as the edges have associated with them particular numerical values representing the migration rate between the connected nodes. Each population can have its own allele frequency spectrum, and through time allele frequencies can change in response to immigration into the population, but are not expected to change due to emigration from it, as we are assuming the migration likelihood of any individual is statistically independent of allele frequencies.

17.2 Island Mainland Model

An island-mainland model is the simplest formulation of a population model. Here, we have a large mainland population and a small island population. For simplicity, define the allele frequencies of these two as $$p_x$$ and $$p_y$$. During each discrete generation, some fraction $$m$$ of the individuals from the mainland arrive into the mating population of the individuals on the island. The island is thus composed of the $$m$$ fraction of individuals that are immigrants and the $$(1-m)$$ fraction that were already on the island.
At the next generation, the allele frequencies of the mainland population remain the same (we are assuming it is large enough that a loss of migrants does not influence allelic or genotypic frequencies), whereas the island population is comprised of the $$m$$ fraction of immigrants, whose frequencies are $$p_x$$, and the $$(1-m)$$ fraction of residents, whose frequency in the last generation was $$p_y$$. Taken together, their frequencies are:

$p_{y,t+1} = (1-m)p_{y,t} + mp_{x,t}$

From this formulation, it is easy to deduce that after a sufficient number of generations, the allele frequencies of both mainland and island will be the same. The island will eventually have the same frequencies as the mainland, though the amount of time it takes depends upon the difference in allele frequencies, $$\delta p = p_x - p_y$$, and the migration rate, $$m$$, to the island population. Since there is no migration from the island to the mainland, the equilibrium frequency will be $$p_x$$. Here is a bit of code that shows the effects that different migration rates may have.

migration_rates <- c(.01,.05,.10,.15)
results <- data.frame(m=rep(migration_rates,each=100),
                      Generation=rep(1:100,times=4),
                      p=NA)
for( m in migration_rates) {
  px <- 0
  py <- 1
  results$p[ results$m==m ] <- py
  for( t in 2:100){
    p.0 <- results$p[ results$m==m & results$Generation == (t-1) ]
    p.1 <- (1-m)*p.0 + px*m
    results$p[ results$m==m & results$Generation == t ] <- p.1
  }
}
results$m <- factor(results$m)

So even for low migration, say $$m=0.01$$, allele frequencies may change rather quickly due to immigration.

17.3 The Island Model

The next most complex model is one where every population is exchanging migrants. This n-island model was first introduced by Sewall Wright (1931). In this one, all populations are connected via a constant migration rate. An allele that arises in one population through mutation can potentially be dispersed to any other population in a single generation, the likelihood of which is determined by the migration rate. The stability of this system is quite high. All populations are sharing migrants and as a consequence, all populations will thus converge on a unified allele frequency, one defined by the global average allele frequency, $$\bar{p}$$. The length of time it takes to get to that equilibrium point is determined by how far away from the global mean the population is and the rate at which migrants are distributed. There are two ways to model this kind of system, one more simple than the other. The simplest way is to consider that the migrants form a giant migrant pool and from that pool they are distributed to the populations. For example, if $$\bar{p}$$ is the global average allele frequency, the migrant pool could be considered to also have this allele frequency. If you believe this is a reasonable approximation, then the allele frequencies at the next generation, say for population $$X$$ in Figure 17.3, are:

$p_{x,t+1} = (1-m)p_{x,t} + m\bar{p}$

In a general context, we can estimate what the allele frequencies will be at an arbitrary time in the future if we know:
- The starting allele frequencies, $$p_0$$
- The migration rate, $$m$$
- How many generations migration has been happening, $$t$$.

This is estimated, following a format similar to the ones we found for both mutation and inbreeding, as:

$p_t = \bar{p} + (p_0 - \bar{p})(1-m)^t$

We can examine the change in allele frequencies through time numerically.
Consider a population that starts out at $$p_X = 0.1$$ but is receiving migrants at a rate of $$m=0.05$$ each generation. With these parameters, we can set up an expectation of allele frequencies for, say, $$100$$ generations, as:

T <- 100
pX <- rep(NA,T)
pX[1] <- 0.1
pbar <- 0.5
m <- 0.05
for( t in 2:T)
  pX[t] <- pbar + (pX[1]-pbar)*(1-m)^t
df <- data.frame( Generation = 1:T, Frequency = pX)

Through time, the allele frequencies change systematically, tending towards the global allele frequency defined by all populations. Some salient assumptions of this model include:

• Generations do not overlap, so that we can use a difference equation approach for understanding connectivity.
• Populations are discrete, in that there are breaks between populations.
• Migration rates are constant through both space and time.
• Migration is symmetric in both directions.

This approach may not be the most realistic one, but it does outline a general way in which we can predict allele frequencies through time.

17.4 Stepping Stone Models

A slightly more realistic model was introduced by Kimura & Weiss (1964), who coined the term 'stepping-stone' model to indicate one that takes into consideration the spatial arrangement of populations as it describes connectivity. Their model consisted of an infinite line of populations, all connected with a unified migration rate, $$m$$. For any population along this continuum, the frequency of alleles at the next generation was dependent upon the following (a simulation sketch follows this list):

• The frequency at the current generation for that population, $$p_i$$.
• The rate at which migrants were arriving at the population from each side. They assumed that the total migration rate was $$m$$, and as such each population receives migrants at a rate of $$\frac{m}{2}$$ each generation from each neighbor.
• Each neighbor may have its own initial allele frequency and would change through time with continued exchange of migrants.
• There is some 'background' level of migration going on, consisting of some rate of migrants, $$m_{\infty}$$, whose allele frequencies are denoted as $$p_{\infty}$$. This part of the equation may be a bit difficult to parameterize (or, said another way, it may be a nice 'buffer' to have in your model to help explain things you can't directly explain).
• Each population has some stochastic change in allele frequencies each generation, denoted as $$\eta_i$$, which may be due to genetic drift and other local processes. The expectation of this 'error' term is $$E[\eta_i] = 0$$ and its variance is $$E[\eta_i^2] = \frac{p_i(1-p_i)}{2N_e}$$, which is consistent with what we've seen for drift already.

Taken as a whole, their formulation is:

$p_{i,t+1} = (1-m-m_\infty)p_{i,t} + \frac{m}{2}(p_{i-1,t} + p_{i+1,t}) + m_\infty p_{\infty} + \eta_i$

Extending this basic model, they also derived the expectations for a 2-dimensional connectivity model where populations were arrayed on a grid with rows and columns. They briefly entertained a 3-dimensional model as well. For each of these models, they estimated the expected decrease in genetic correlation between pairs of populations as the distance between populations increases. Perhaps not surprisingly, the correlation is higher in 1-dimensional than in 2-dimensional arrangements. It is also higher in 2-dimensional than in 3-dimensional ones.
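To see the smoothing behavior of a stepping-stone arrangement concretely, here is a minimal R sketch of our own (not Kimura & Weiss's code): we assume a ring of 20 populations and drop the background-migration and drift terms, $$m_\infty$$ and $$\eta_i$$, for clarity.

n.pops <- 20                     # populations arranged in a ring
T <- 200                         # generations to simulate
m <- 0.1                         # total migration rate; m/2 from each neighbor
p <- matrix(NA, nrow=T, ncol=n.pops)
p[1,] <- c(rep(1, n.pops/2), rep(0, n.pops/2))   # a sharp step in initial frequencies
for( t in 2:T){
  left  <- p[t-1, c(n.pops, 1:(n.pops-1))]       # neighbor i-1, wrapping around
  right <- p[t-1, c(2:n.pops, 1)]                # neighbor i+1, wrapping around
  p[t,] <- (1-m)*p[t-1,] + (m/2)*(left + right)
}
range(p[T,])    # frequencies have smoothed toward the global mean of 0.5

Each generation, every population keeps a $$(1-m)$$ fraction of its own frequency and receives $$\frac{m}{2}$$ from each neighbor, so the initial step in frequencies diffuses outward and the whole ring tends toward $$\bar{p} = 0.5$$.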
Their paper is an excellent introduction to the mathematics underlying connectivity in discrete space and should be examined in detail by those who are interested in modeling this in a more complete fashion. However, for completeness here, we will look at a more general approach to understanding connectivity and changes in allele frequencies based upon a specified 'network' of connectivity.

17.5 The General Model

A broader approach that can be applied to all types of connectivity models is one that allows you to specify the underlying connectivity model and then estimate allele frequency changes on that model. Instead of having a single migration rate for all populations or a rigid arrangement of populations, if you can specify the topology of a connectivity network, you can use the following approach to estimate allele frequency changes. We will start with the simple case of three populations, each with its own frequencies and connected by individual migration rates. In this system, we can consider the frequencies for population $$X$$ as being derived from the frequencies of all populations with which it exchanges migrants as well as their individual rates. Here is an example from the diagram below.

$p_{x,t+1} = m_{x \leftrightarrow y}p_{y,t} + m_{x \leftrightarrow z}p_{z,t} + [1 - (m_{x \leftrightarrow y} + m_{x \leftrightarrow z})]p_{x,t}$

The current frequency is determined by the frequencies of all connected populations (both $$Y$$ and $$Z$$ in this case) at the previous generation and their individual migration rates (these are the immigrants), plus the residents ($$1 - m_{all\;migration\;to\;this\;population}$$) at the previous allele frequency. In a similar fashion, the other populations are:

$p_{y,t+1} = m_{x \leftrightarrow y}p_{x,t} + m_{y \leftrightarrow z}p_{z,t} + [1 - (m_{x \leftrightarrow y} + m_{y \leftrightarrow z})]p_{y,t}$

and

$p_{z,t+1} = m_{x \leftrightarrow z}p_{x,t} + m_{y \leftrightarrow z}p_{y,t} + [1 - (m_{x \leftrightarrow z} + m_{y \leftrightarrow z})]p_{z,t}$

In R, we can iterate through this and see these behaviors. Here we look at three populations, each starting with different allele frequencies, and estimate allele frequencies for a period of T = 75 generations.

T <- 75
pX <- rep(NA,T)
pY <- rep(NA,T)
pZ <- rep(NA,T)
pX[1] <- 0.1
pY[1] <- 0.5
pZ[1] <- 0.9

These populations exchange migrants at different basal rates. For this example, we will assume the exchange of migrants is symmetric, though it is not necessary to do so, and you can see how the following code could be extended to account for asymmetry.

mXY <- 0.04
mXZ <- 0.02
mYZ <- 0.08

Then, the simulation is run across the T generations, and at each iteration the frequencies of each population are updated based upon these migration rates and the frequencies of the populations from which the immigrants come.

for( gen in 2:T){
  pX[gen] <- mXY*pY[gen-1] + mXZ*pZ[gen-1] + ( 1 - (mXY+mXZ))*pX[gen-1]
  pY[gen] <- mXY*pX[gen-1] + mYZ*pZ[gen-1] + ( 1 - (mXY+mYZ))*pY[gen-1]
  pZ[gen] <- mXZ*pX[gen-1] + mYZ*pY[gen-1] + ( 1 - (mYZ+mXZ))*pZ[gen-1]
}
df <- data.frame( Generation=rep(1:T,times=3))
df$Frequency=c(pX,pY,pZ)
df$Population <- rep(c("X","Y","Z"), each=T)

Through time, we see that the allele frequencies all converge on the global allele frequency:

mean( c(pX,pY,pZ))
## [1] 0.5

There are a couple of points here to be made.

• The rate at which allele frequencies change depends upon $$\delta p = p_i - \bar{p}$$ and the migration rate.
The approach to $$\bar{p}$$ was faster for population $$Z$$ than population $$X$$, as the migration rates to and from $$X$$ are overall lower than those for $$Z$$.

• Some situations, depending upon the configuration of the connectivity 'network,' can result in reversals of direction in allele frequency change. Look at population $$Y$$. Its allele frequencies started out at $$\bar{p}$$ and initially increased towards those of population $$Z$$. This is because the relative rate of migration between $$Y$$ & $$Z$$ is greater than that between $$Y$$ & $$X$$.

These observations suggest that when you are doing simulations, you need to wait a bit to allow the dynamics of the system to 'burn in'. The equilibrium point we are interested in seeing is only attainable after the entire system has gone through many generations and the idiosyncrasies have been thoroughly iterated through.

17.6 General Strategies

In dealing with these kinds of dynamical systems, there are some important things that you need to consider when attempting to understand the underlying dynamics. These are general rules, though there are always exceptions. However, if you follow them, you will probably have a much easier time with the calculations.

1. Draw out the network if you can. If it is too complicated, then at least try to capture the main flows through the network, graphically. Visualization of network structure is very helpful for understanding where you think things should be going.
2. In most systems, allele frequencies will tend towards $$\bar{p}$$ for all populations given enough time. In cases where populations are isolated (e.g., $$m=0$$), or migration is unidirectional (as in the mainland/island model), you may have components of the overall topology that will never stabilize on $$\bar{p}$$.
3. Once specified, iterate through each generation slowly, checking to see if you are parameterizing the system correctly. There are an infinite number of ways to get it wrong via a typo or other programmatic error. Always 'sanity check' your results.

With these strategies in mind, you should be able to attack any population arrangement.
# What is RO in physics?

RO occurs when a solution is pressurized against a solvent-selective membrane, and the applied pressure exceeds the osmotic pressure difference across the membrane. Water is the solvent in most existing reverse osmosis applications; the solutes may be salts or organic compounds.

## How do you write rho?

Rho (uppercase/lowercase Ρ ρ) is the 17th letter of the Greek alphabet. It is used to represent the "r" sound in Ancient and Modern Greek. In the system of Greek numerals, it has a value of 100.

## What is rho in density?

Density (volumetric mass density or specific mass) is the substance's mass per unit of volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used.

## What is rho in math?

The Greek letter ρ (rho) is used in math as a variable and in physics to represent density.

## What is RO in chemistry?

Reverse osmosis (RO) is a water purification process that removes ions, unwanted molecules and larger particles from drinking water using a partially permeable membrane. As a result, the solute is kept on the membrane's pressurized side and the pure solvent is allowed to pass to the other side.

## What is rho in physics class 11?

The Greek letter "rho" represents the resistivity of a material. The resistivity of a material is the resistance of a wire of that material of unit length and unit cross-sectional area. The unit for resistivity is the ohm-metre.

## Why is density denoted by rho?

One possible explanation is that the Greek word for "flow" is "ροή", which starts with ρ. Density is related to the volumetric flow rate (see Volumetric flow rate), so this is not too far fetched.

## What does rho stand for in Greek?

The Greek letter rho is used to stand for "density" and "resistivity" in physics. It also appears in "rho meson", a short-lived hadronic particle in particle physics.

## Is momentum rho or p?

Momentum is commonly represented by the letter "p". I've never seen the Greek letter rho used for momentum.

## What is rho for water?

Water as the reference, at its highest density at 3.98 °C, has ρ = 1 g/cm3. The correct SI unit is ρ = 1000 kg/m3.

## What is rho = m/V?

In the question we are given the relation between density, mass and volume as $\rho = \dfrac{m}{V}$, where '$\rho$' is the density, 'm' is the mass and 'V' is the volume. We need to find the coefficient of volume expansion of the liquid.

## Is omicron used in mathematics?

Omicron is the 15th letter of the Greek alphabet. In the system of Greek numerals it has a value of 70. It is rarely used in mathematics because it is indistinguishable from the Latin letters O, o and easily confused with the digit 0. This letter is derived from the Phoenician letter ayin.

## Why are Greek letters used in physics?

Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities.

## What is the phi symbol?

Phi (/faɪ/; uppercase Φ, lowercase φ or ϕ; Ancient Greek: ϕεῖ pheî [pʰéî̯]; Modern Greek: φι fi [fi]) is the 21st letter of the Greek alphabet.

## How does an RO work?

Reverse Osmosis (RO) is a water treatment process that removes contaminants from water by using pressure to force water molecules through a semipermeable membrane. During this process, the contaminants are filtered out and flushed away, leaving clean, delicious drinking water.
## Why is RO extensively used?

The RO process is used extensively in desalination due to its relatively low energy consumption. In 2011, the RO process was used in 66% of desalination plants, according to the International Desalination Association (IDA).

## What is an RO membrane?

Reverse osmosis (RO) membranes play a key role in wastewater treatment units as they are used to remove salts and other pollutants effectively. RO membrane performance is affected by many different factors such as feed characteristics and operational parameters during operation.

## What is rho in physics pressure?

In the hydrostatic pressure formula $p = \rho g h$: ρ (rho) is the density of the fluid (i.e., the practical density of fresh water is 1000 kg/m3); g is the acceleration due to gravity (approximately 9.81 m/s2 on Earth's surface); h is the height of the fluid column (in metres).

## Is rho a constant?

In some numerical methods "rho" is constant, but in some others, such as the WCSPH method, "rho" is not constant and the pressure term is calculated with the Tait equation, which is a function of two rhos, the reference rho and the new rho.

## Is rho an SI unit of density?

Density is the mass per unit of volume, so the derived SI unit for density is kg/m3. The SI unit of electrical resistivity, by contrast, is the ohm-metre (Ω⋅m); resistivity is also commonly represented by the Greek letter ρ, rho.

## What is the unit for density?

Density has the units of mass divided by volume, such as grams per cubic centimetre (g/cm3) or kilograms per litre (kg/L).

## What is the SI unit of density?

Though the SI unit of density is kg/m³, for convenience we use g/cm³ for solids, g/ml for liquids, and g/L for gases. Density can be explained as the relationship between the mass of the substance and the volume it takes up.
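To make the hydrostatic formula above concrete, here is a small worked sketch in Python (the fluid and depth are our own example values, not from the FAQ):

rho = 1000.0   # density of fresh water, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2
h = 10.0       # height of the fluid column, m

pressure = rho * g * h   # hydrostatic pressure p = rho*g*h, in pascals
print(pressure)          # 98100.0 Pa, roughly one atmosphere at 10 m depth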
# Homework questions thus far??? Section 4.10? 5.1? 5.2?

## Presentation on theme: "Homework questions thus far??? Section 4.10? 5.1? 5.2?"— Presentation transcript:

Homework questions thus far??? Section 4.10? 5.1? 5.2?

The Definite Integral Chapters 7.7, 5.2 & 5.3 January 30, 2007

Estimating Area vs Exact Area Pictures Riemann sum rectangles, ∆t = 4 and n = 1:

Better Approximations The Trapezoid Rule uses small straight lines. The next highest degree would be parabolas… Simpson's Rule. Mmmm… parabolas… Put a parabola across each pair of subintervals: So n must be even!

Simpson's Rule Formula Like the trapezoidal rule, but divide by 3 instead of 2; the interior coefficients alternate 4,2,4,2,…,4, so the second coefficient from each end is 4:

$\int_a^b f(x)\,dx \approx \frac{\Delta x}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right]$

Simpson's Rule uses parabolas to fit the curve, where n is even and ∆x = (b - a)/n. It is related to the trapezoid and midpoint rules by $S_{2n} = (T_n + 2M_n)/3$.

Use Simpson's Rule to approximate the definite integral of g(x) = ln(x)/x on the interval [3,11] with n = 4. Then use $T_4$.

Runners: A radar gun was used to record the speed of a runner during the first 5 seconds of a race (see table). Use Simpson's rule to estimate the distance the runner covered during those 5 seconds.

Definition of the Definite Integral: If f is a continuous function defined for a ≤ x ≤ b, we divide the interval [a,b] into n subintervals of equal width ∆x = (b-a)/n. We let $x_0 (=a), x_1, x_2, \ldots, x_n (=b)$ be the endpoints of these subintervals and we let $x_1^*, x_2^*, \ldots, x_n^*$ be any sample points in these subintervals, so $x_i^*$ lies in the ith subinterval $[x_{i-1}, x_i]$. Then the definite integral of f from a to b is:

$\int_a^b f(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(x_i^*)\,\Delta x$

Express the limit as a definite integral. Express the definite integral as a limit.

Properties of the Definite Integral 1) $\int_b^a f(x)\,dx = -\int_a^b f(x)\,dx$ 2) $\int_a^a f(x)\,dx = 0$ 3) $\int_a^b c\,f(x)\,dx = c\int_a^b f(x)\,dx$ for "c" a constant

Properties of the Definite Integral Given that: … Evaluate the following: …

Given the graph of f, find: … Evaluate: …

Integral Defined Functions Let f be continuous. Pick a constant a. Define: $F(x) = \int_a^x f(t)\,dt$ Notes: the lower limit a is a constant. The variable is x: it describes how far to integrate. t is called a dummy variable; it's a placeholder. F describes how much area is under the curve up to x.

Example Let f be given by the graph below. Let a = 1, and define F as above. Estimate F(2) and F(3). Where is F increasing and decreasing? F is increasing where f is positive (adding area); F is decreasing where f is negative (subtracting area).

Fundamental Theorem I Derivatives of integrals: Fundamental Theorem of Calculus, Version I: If f is continuous on an interval, and a a number on that interval, then the function F(x) defined by $F(x) = \int_a^x f(t)\,dt$ has derivative f(x); that is, F'(x) = f(x).

Example Suppose we define $F(x) = \int_a^x \cos(t^2)\,dt$.
Then F'(x) = cos(x²).

Example Suppose we define $F(x) = \int_a^x (t^2 + 2t + 1)\,dt$. Then F'(x) = x² + 2x + 1.

Examples: Fundamental Theorem of Calculus (Part 1): If f is continuous on [a, b], then the function defined by $g(x) = \int_a^x f(t)\,dt$ is continuous on [a, b] and differentiable on (a, b), and $g'(x) = f(x)$. Combined with the Chain Rule: $\frac{d}{dx}\int_a^{u(x)} f(t)\,dt = f(u(x))\,u'(x)$.

In-class Assignment 1. a. Estimate (by counting the squares) the total area between f(x) and the x-axis. b. Using the given graph, estimate … c. Why are your answers in parts (a) and (b) different? 2. Find: …

Consider the function f(x) = x+1 on the interval [0,3]. First let the bottom bound = 1. If x > 1, we calculate the area using the formula for trapezoids. Now calculate with bottom bound = 1 and x < 1.

So, on [0,3], we have that $F(x) = \int_1^x (t+1)\,dt = \frac{x^2}{2} + x - \frac{3}{2}$, and F'(x) = x + 1 = f(x), as the theorem claimed!

Very Powerful! Every continuous function is the derivative of some other function! Namely: $f(x) = \frac{d}{dx}\int_a^x f(t)\,dt$
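Since the slides describe Simpson's rule only in words and formulas, here is a minimal Python sketch of it (the function name and the test integrand are our own, not from the presentation):

def simpson(f, a, b, n):
    """Approximate the integral of f over [a, b] using Simpson's rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule requires an even number of subintervals")
    dx = (b - a) / n
    total = f(a) + f(b)                  # endpoint coefficients are 1
    for i in range(1, n):
        coeff = 4 if i % 2 == 1 else 2   # interior coefficients alternate 4,2,4,...,4
        total += coeff * f(a + i * dx)
    return total * dx / 3                # divide by 3 instead of 2

# Simpson's rule is exact for polynomials up to degree 3, so a quadratic is a handy check:
print(simpson(lambda x: x**2, 0, 3, 4))  # 9.0, the exact value of the integral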
University of Florida/Egm4313/s12.team11.perez.gp/R3.9

Problem Statement

Solve the initial value problem. State which rule you are using. Show each step of your calculation in detail.

(K 2011 pg.85 #13) ${\displaystyle 8y''-6y'+y=6\cosh x\!}$ (1)

Initial conditions are: ${\displaystyle y(0)=0.2,y'(0)=0.05\!}$

Solution

The homogeneous ordinary differential equation is ${\displaystyle 8y''-6y'+y=0\!}$ We can use this information to determine the characteristic equation: ${\displaystyle 8\lambda ^{2}-6\lambda +1=0\!}$ And proceeding to find the roots, ${\displaystyle 4\lambda (2\lambda -1)-1(2\lambda -1)=0\!}$ Thus, ${\displaystyle (4\lambda -1)(2\lambda -1)=0\!}$. Solving for the roots, we find that ${\displaystyle \lambda ={\frac {1}{4}},{\frac {1}{2}},\!}$ where the homogeneous solution is ${\displaystyle y_{k}=c_{1}e^{{\frac {1}{4}}x}+c_{2}e^{{\frac {1}{2}}x}\!}$.

For the particular solution ${\displaystyle y_{p}\!}$ of the non-homogeneous ordinary differential equation, note that ${\displaystyle \cosh x={\frac {e^{x}+e^{-x}}{2}}\!}$, so the right-hand side is ${\displaystyle 6\cosh x=3e^{x}+3e^{-x}\!}$. Using the Sum Rule as described in Section 2.7, the above function translates into ${\displaystyle y_{p}=y_{p1}+y_{p2}\!}$, where Table 2.1 tells us that: ${\displaystyle y_{p1}=Ae^{x}\!}$ and ${\displaystyle y_{p2}=Be^{-x}\!}$. Therefore, ${\displaystyle y_{p}=Ae^{x}+Be^{-x}\!}$. (Neither ${\displaystyle e^{x}\!}$ nor ${\displaystyle e^{-x}\!}$ solves the homogeneous equation, so no modification is needed.)

Now, we can substitute the values (${\displaystyle y_{p},y_{p}',y_{p}''\!}$) into (1) to get: ${\displaystyle 8(Ae^{x}+Be^{-x})-6(Ae^{x}-Be^{-x})+Ae^{x}+Be^{-x}=3e^{x}+3e^{-x}\!}$ ${\displaystyle 3Ae^{x}+15Be^{-x}=3e^{x}+3e^{-x}\!}$

Now that we have this equation, we can equate coefficients to find that: ${\displaystyle 3A=3\!}$ ${\displaystyle \therefore A=1\!}$ ${\displaystyle 15B=3\ \therefore B={\frac {3}{15}}={\frac {1}{5}}\!}$ and thus, ${\displaystyle y_{p}=e^{x}+{\frac {1}{5}}e^{-x}\!}$

We find that the general solution of the given ordinary differential equation is: ${\displaystyle y=y_{k}+y_{p}=c_{1}e^{{\frac {1}{4}}x}+c_{2}e^{{\frac {1}{2}}x}+e^{x}+{\frac {1}{5}}e^{-x}\!}$

Solving for the initial conditions given and first plugging in ${\displaystyle y(0)=0.2\!}$, we get: ${\displaystyle 0.2=c_{1}e^{{\frac {1}{4}}(0)}+c_{2}e^{{\frac {1}{2}}(0)}+e^{0}+{\frac {1}{5}}e^{0}\!}$ ${\displaystyle 0.2=c_{1}+c_{2}+1+{\frac {1}{5}}\!}$ ${\displaystyle \therefore c_{1}+c_{2}=-1\!}$. (2)

Differentiating the general solution gives: ${\displaystyle y'={\frac {1}{4}}c_{1}e^{{\frac {1}{4}}x}+{\frac {1}{2}}c_{2}e^{{\frac {1}{2}}x}+e^{x}-{\frac {1}{5}}e^{-x}\!}$

The second initial condition that was given to us, ${\displaystyle y'(0)=0.05\!}$, can now be plugged in: ${\displaystyle 0.05={\frac {1}{4}}c_{1}+{\frac {1}{2}}c_{2}+1-{\frac {1}{5}}\!}$ ${\displaystyle {\frac {1}{4}}c_{1}+{\frac {1}{2}}c_{2}=-0.75\!}$ ${\displaystyle \therefore c_{1}+2c_{2}=-3\!}$ (3)

Once we solve (2) and (3), we get the values: ${\displaystyle c_{1}=1,c_{2}=-2\!}$.
And once we substitute these values, we get the following solution for this IVP: ${\displaystyle y=e^{{\frac {1}{4}}x}-2e^{{\frac {1}{2}}x}+e^{x}+{\frac {1}{5}}e^{-x}\!}$

(K 2011 pg.85 #14) ${\displaystyle y''+4y'+4y=e^{-2x}\sin 2x\!}$ (1)

Initial conditions are: ${\displaystyle y(0)=1,y'(0)=-1.5\!}$

Solution

The homogeneous ordinary differential equation is ${\displaystyle y''+4y'+4y=0\!}$ We can use this information to determine the characteristic equation: ${\displaystyle \lambda ^{2}+4\lambda +4=(\lambda +2)(\lambda +2)=0\!}$ Solving for the roots, we find that ${\displaystyle \lambda =-2,-2\!}$ (a double root), where the homogeneous solution is: ${\displaystyle y_{k}=(c_{1}+c_{2}x)e^{-2x}\!}$

Note that the Modification Rule of Section 2.7 does not apply here: the forcing term ${\displaystyle e^{-2x}\sin 2x\!}$ corresponds to ${\displaystyle \lambda =-2\pm 2i\!}$, which is not a root of the characteristic equation (only ${\displaystyle e^{-2x}\!}$ and ${\displaystyle xe^{-2x}\!}$ solve the homogeneous equation). So, by Table 2.1, we choose: ${\displaystyle y_{p}=e^{-2x}(K\cos 2x+M\sin 2x)\!}$

A convenient shortcut for the derivatives: writing ${\displaystyle y=e^{-2x}v\!}$ turns the left side of (1) into ${\displaystyle y''+4y'+4y=e^{-2x}v''\!}$. With ${\displaystyle v=K\cos 2x+M\sin 2x\!}$ we get ${\displaystyle v''=-4v\!}$, so substituting into (1) gives: ${\displaystyle -4e^{-2x}(K\cos 2x+M\sin 2x)=e^{-2x}\sin 2x\!}$

Now that we have this equation, we can equate coefficients to find that: ${\displaystyle -4K=0\!}$ and ${\displaystyle -4M=1\!}$ and finally discover that: ${\displaystyle K=0\!}$ and ${\displaystyle M=-{\frac {1}{4}}\!}$. Plugging in these values in ${\displaystyle y_{p}\!}$, we find that: ${\displaystyle y_{p}=-{\frac {1}{4}}e^{-2x}\sin 2x\!}$

And finally, we arrive at the general solution of the given ordinary differential equation: ${\displaystyle y=y_{k}+y_{p}=(c_{1}+c_{2}x)e^{-2x}-{\frac {1}{4}}e^{-2x}\sin 2x\!}$

Solving for the initial conditions given and first plugging in ${\displaystyle y(0)=1\!}$, we get: ${\displaystyle 1=c_{1}e^{0}\!}$ ${\displaystyle \therefore c_{1}=1\!}$

Differentiating the general solution gives: ${\displaystyle y'=e^{-2x}\left(c_{2}-2c_{1}-2c_{2}x+{\frac {1}{2}}\sin 2x-{\frac {1}{2}}\cos 2x\right)\!}$

The second initial condition that was given to us, ${\displaystyle y'(0)=-1.5\!}$, can now be plugged in: ${\displaystyle -1.5=c_{2}-2c_{1}-{\frac {1}{2}}\!}$ ${\displaystyle \therefore c_{2}=1\!}$

And once we substitute these values, we get the following solution for this IVP: ${\displaystyle y=(1+x)e^{-2x}-{\frac {1}{4}}e^{-2x}\sin 2x\!}$
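A quick sanity check of this solution, as a sketch in Python (assuming the SymPy library is available; the check is ours, not part of the original write-up):

import sympy as sp

x = sp.symbols('x')
y = sp.exp(-2*x) * (1 + x - sp.sin(2*x)/4)

# Residual of y'' + 4y' + 4y - e^(-2x) sin(2x); simplifies to 0
residual = sp.diff(y, x, 2) + 4*sp.diff(y, x) + 4*y - sp.exp(-2*x)*sp.sin(2*x)
print(sp.simplify(residual))        # 0
print(y.subs(x, 0))                 # 1, matching y(0) = 1
print(sp.diff(y, x).subs(x, 0))     # -3/2, matching y'(0) = -1.5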
# How would rainbows appear on other planets?

Are other planets capable of producing rainbows? How would those rainbows appear? Can rain, clouds or ice of substances other than water produce rainbows?

note 1: I've verified @JamesK's answer's index of refraction of 1.27 (since no source was cited), at least for a temperature of 111 K, yay! On a colder day, say 90 K, the index goes up and the rainbow will shrink a few degrees, close to the size of that on Earth. Source for methane: Source for water: Now @CarlWitthoft shows two unlabeled plots with no sources cited and very different values for $$n$$.

note 2: @CarlWitthoft's unsourced claim that methane has a significantly lower dispersion than water in visible light appears to be without merit. I've plotted both materials on the same axis and they are comparable. The rainbows will have slightly different spreading of colors, but I do not think the rainbow will disappoint!

@JamesK's answer mentions that Titan could see rainbows from liquid methane rain. Using math from 1, 2, 3, where $$k$$ is the ratio of the droplet's refractive index to the atmosphere's:

$$k = \frac{n_{droplet}}{n_{atmosphere}}$$ $$\alpha = \arcsin\left(\sqrt{ \frac{4-k^2}{3} } \right)$$ $$\beta = \arcsin\left( \frac{\sin\alpha}{k} \right)$$ $$\theta = 2\phi = 4\beta - 2\arcsin(k \sin \beta)$$

Actually, a lower index makes the rainbow larger. Remember that red is on the outside. With $$k=4/3\approx1.33$$ the rainbow is at ~42°; for $$k=1.27$$ it blows up to ~52°. All else equal it would be a little brighter as well; with a larger incident angle at the back of the drop, the Fresnel reflection will be a bit stronger. Source

# https://www.stewartcalculus.com/data/ESSENTIAL%20CALCULUS%202e/upfiles/instructor/eclt_wp_0301_inst.pdf
import numpy as np
import matplotlib.pyplot as plt

degs = 180.0 / np.pi   # radians-to-degrees conversion (was missing)

k = np.linspace(1.2, 1.5, 31)
alpha = np.arcsin(np.sqrt((4. - k**2) / 3.))
beta = np.arcsin(np.sin(alpha) / k)
phi = 2*beta - np.arcsin(k * np.sin(beta))
theta = 2 * phi

things = (alpha, beta, theta)
names = ('alpha', 'beta', 'theta = 2phi')

plt.figure()
for i, (thing, name) in enumerate(zip(things, names)):
    plt.subplot(3, 1, i + 1)
    plt.plot(k, degs * thing)
    plt.title(name, fontsize=16)
    plt.plot(k[7], degs * thing[7], 'ok')    # mark k = 1.27
    plt.plot(k[13], degs * thing[13], 'ok')  # mark k = 1.33
plt.show()

• I can't tell but I think you've missed the important part: water is dispersive (delta $n$ with $\lambda$); if methane is not, then all wavelengths enter and exit at the same angle, and no rainbow. – Carl Witthoft Feb 22 '19 at 16:40
• @CarlWitthoft "...if methane is not (dispersive)..." can you name even one dielectric that isn't? Dispersion in visible wavelengths comes from absorption in the UV and is a pretty universal attribute of collections of atoms. I think you mean "substantially less dispersive than water". – uhoh Feb 22 '19 at 23:04
• Regarding the refractive index of methane, this may be of use (pdf) – user24157 Feb 22 '19 at 23:48
• @mistertribs thank you very much; I've incorporated that into my answer. – uhoh Feb 23 '19 at 11:41

Rainbows occur when sunlight shines through rain. This is rare in the solar system. Rain (of sulphuric acid) might be common enough under Venus's clouds, but there is no sun. Conversely, there is plenty of sun on Mars, but no rain, and only very rare clouds. It rains on Titan: methane rain. Methane has a lower refractive index than water (1.27 instead of 1.33), which would make the rainbows larger (about 52° instead of 42°).
However the atmosphere of Titan is hazy, and while there is some light on the surface, the sun's disc is not visible. There is rain in some layers of the gas giants, but again not in the outer layers where the sun is visible. It is likely that the Earth is the only place in the solar system where rainbows are a common phenomenon.

• Maybe they are there but we can't see them because the sun, the planets outside Earth's orbit, and an observer are never arranged at the roughly 40-degree angle needed to produce a rainbow from the Sun in the atmosphere. – Muze Feb 21 '19 at 21:07
• Yes. Earth should be the only place where rainbows are common. Other celestial bodies should also be able to support rainbows where there is mist or vapor of some chemical, and enough sunlight, but those criteria are rarely met. – Max0815 Feb 22 '19 at 2:56
• It's not the refractive index which leads to rainbows but rather the dispersion (variation of $n$ with wavelength). – Carl Witthoft Feb 22 '19 at 16:38
• @CarlWitthoft When dispersion is low (or the spreading is otherwise confounded) there will still be a rainbow, but it will be less colorful; it may stop dispersing but it doesn't stop refracting! See What actually happens to reduce the perceived color in a "white rainbow" or "fog-bow"? – uhoh Feb 23 '19 at 0:14
• What do you mean by "but there is no sun (in Venus)"? – Nilay Ghosh May 17 at 5:22

Take a look at these charts. The methane one is the best I could find in a quick search, but it suggests the dispersion over the visible wavelength band is a fraction of the value for water. Since the existence of a rainbow depends on the ability of the substance to 'bend' different wavelengths by different amounts, you can see that methane, at least, would produce a rather unsatisfying rainbow. And even that assumes that you had an atmosphere which supported methane droplets of an appropriate size to achieve a prismatic effect. Roughly speaking, you would want the methane droplets to be larger than the water droplets which produce rainbows on Earth by the ratio of their dispersions. This is because the angular output spread depends in part on the length of the path through the droplets.

• Any differences in the range of color in the rainbow? Keep in mind not only the form of rain can produce rainbows; the clouds of Jupiter and other planets can as well. – Muze Feb 22 '19 at 19:43
• @Muze Unless the molecule in question (water, methane or other) has a severely sharp absorption edge, the color range is limited only by our retinal ability to discriminate wavelengths. – Carl Witthoft Feb 22 '19 at 19:44
• Yes, but don't most transparent liquids refract light? – Muze Feb 22 '19 at 19:46
• @Muze there's two things here that often get lumped together, and they shouldn't be. While refract just means bend, disperse means bend different colors differently. If you had rain droplets (or prisms) with low dispersion you would still get a rainbow, but it would be white. What actually happens to reduce the perceived color in a "white rainbow" or "fog-bow"? Carl and many others might be "unsatisfied" by it, but it would still be there, narrower and more concentrated but less colorful. – uhoh Feb 22 '19 at 23:58
• @uhoh yeah you're partly right - the angular output (not just the translation) depends on the entrance and exit angles more than the droplet size. – Carl Witthoft Feb 25 '19 at 12:30
# HCF & LCM of numbers

Are you preparing for campus placements, Banking, SSC, IAS, Insurance, Defence and other competitive exams? Then make sure to take some time to practice the LCM & HCF questions and answers in Quantitative Aptitude. Moreover, only those questions are included that are relevant and likely to be asked in any competitive exam. So, take these questions and answers, brush up your skills, and practice to stay fully prepared for your exam.

• Q1. Find the L.C.M. of $\frac{108}{375}, 1\frac{17}{25}, \frac{54}{55}$.
• Q2. For how many values of 'P' will the LCM of P and 20 be 40?
• Q3. Find the greatest number of 4 digits and the least number of 5 digits that have 147 as their H.C.F.
• Q4. There is a number greater than 1 which, when divided by 4, 5 and 6, leaves the same remainder of 3 in each case. Find the largest number smaller than 1000 which satisfies the given condition.
• Q5. Find the largest number which can exactly divide 216, 252, 294.
• Q6. A rectangular courtyard 15 meters 17 cms long and 9 meters 2 cms wide is to be paved exactly with square tiles, all of the same size. What is the largest size of tile which could be used for the purpose?
• Q7. Ram wants to utilise his unused field and plans to plant some trees. He plants 88 guava trees, 132 papaya trees and 220 sugarcane trees in equal rows (in terms of number of trees). Also, he wants to make distinct rows of trees (i.e. only one type of tree in one row). Calculate the minimum number of rows.
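Several of these questions reduce to a single HCF (gcd) or LCM computation, which is easy to check with a short Python sketch (the helper function is our own; Q5 and Q2 are used as test cases):

from functools import reduce
from math import gcd

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b for positive integers
    return a * b // gcd(a, b)

# Q5: the largest number exactly dividing 216, 252 and 294 is their HCF
print(reduce(gcd, [216, 252, 294]))                    # 6

# Q2: values of P for which lcm(P, 20) == 40
print([p for p in range(1, 41) if lcm(p, 20) == 40])   # [8, 40], i.e. 2 values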
## College Algebra (11th Edition)

$x=35$

$\bf{\text{Solution Outline:}}$ To solve the given equation, $(x-3)^{2/5}=4 ,$ raise both sides to the exponent $\dfrac{5}{2} .$ Then use the laws of exponents and the concepts of rational exponents to solve the resulting equation. Finally, check that the solution satisfies the original equation.

$\bf{\text{Solution Details:}}$ Raising both sides to the exponent $\dfrac{5}{2} ,$ the given equation becomes \begin{array}{l}\require{cancel} \left( (x-3)^{2/5} \right)^{5/2}=4^{5/2} .\end{array} Using the Power Rule of the laws of exponents, which is given by $\left( x^m \right)^p=x^{mp},$ the expression above is equivalent to \begin{array}{l}\require{cancel} (x-3)^{\frac{2}{5}\cdot\frac{5}{2} }=4^{\frac{5}{2}} \\\\ x-3=4^{\frac{5}{2}} .\end{array} Using the definition of rational exponents, which is given by $a^{\frac{m}{n}}=\sqrt[n]{a^m}=\left(\sqrt[n]{a}\right)^m,$ the expression above is equivalent to \begin{array}{l}\require{cancel} x-3=\left(\sqrt{4}\right)^{5} \\\\ x-3=\left(2\right)^{5} \\\\ x-3=32 \\\\ x=32+3 \\\\ x=35 .\end{array} Upon checking, $x=35$ satisfies the original equation.
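As a quick numeric sanity check of this answer (a Python one-liner of our own):

x = 35
print((x - 3) ** (2/5))   # prints 4.0 up to floating-point rounding, confirming x = 35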
A nitrogen and oxygen containing molecule is decomposed into its elements. It is found to contain 14 g of nitrogen and 40 g of oxygen. What is the empirical formula of the compound?

Jan 27, 2016

$\text{N}_2\text{O}_5$

Explanation:

In order to find a compound's empirical formula you must find the smallest whole number ratio that exists between the elements that are a part of this compound. You know that your unknown compound contains $\text{14 g}$ of nitrogen and $\text{40 g}$ of oxygen. Your first goal here will be to use the molar masses of these two elements to figure out how many moles of each you get in this sample.

$\text{For N: } 14 \text{ g} \times \frac{1 \text{ mole N}}{14.007 \text{ g}} = 0.9995 \text{ moles N}$

$\text{For O: } 40 \text{ g} \times \frac{1 \text{ mole O}}{15.9994 \text{ g}} = 2.500 \text{ moles O}$

In order to find the mole ratio that exists between these two elements in the compound, divide both values by the smallest one:

$\text{For N: } \frac{0.9995 \text{ moles}}{0.9995 \text{ moles}} = 1$

$\text{For O: } \frac{2.500 \text{ moles}}{0.9995 \text{ moles}} = 2.501 \approx 2.5$

In order to correctly determine the empirical formula of the compound, you need the smallest whole number ratio that exists between these two elements. To get that, simply multiply both values by $2$. This will get you

$\left(\text{N}_{1}\text{O}_{2.5}\right) \times 2 \Rightarrow \text{N}_2\text{O}_5$
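The same arithmetic can be written out programmatically; here is a small Python sketch (using the molar masses quoted above):

moles_N = 14 / 14.007      # ≈ 0.9995 mol N
moles_O = 40 / 15.9994     # ≈ 2.500 mol O

smallest = min(moles_N, moles_O)
ratio_N = moles_N / smallest    # 1.0
ratio_O = moles_O / smallest    # ≈ 2.5, so double both to reach whole numbers

print(round(2 * ratio_N), round(2 * ratio_O))   # 2 5 -> N2O5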
# Chapter 1 - Section 1.8 - Introduction to Variables, Algebraic Expressions, and Equations - Exercise Set: 74

$20n$

#### Work Step by Step

"A number times twenty," where $n$ stands for the number, translates to the algebraic expression $20n$.
I wrote this for a guest post on Cathy O’Neil’s blog mathbabe.

Climate change is one of those issues that I heard about as a kid, and I naturally assumed that scientists, political leaders, and the rest of the world would work together to solve it. Then I grew up and realized that never happened. Carbon dioxide emissions are continuing to rise, and extreme weather is becoming normal. Meanwhile, nobody in politics seems to want to act, even when major scientific organizations — and now the World Bank — have warned us in the strongest possible terms that the current path towards $4^{\circ}\text{C}$ or more of warming is an absolutely terrible idea (the World Bank called it “devastating”).

A little frustrated, I showed up last fall at my school’s umbrella environmental group to hear about its various programs. Intrigued by a curious-sounding divestment campaign, I went to its first meeting. I had zero knowledge of or experience with the climate movement, and did not realize what it was going to become.

Divestment from fossil fuel companies is a simple and brilliant idea, popularized by Bill McKibben’s article “Global Warming’s Terrifying New Math.” As McKibben observes, there are numerous reasons to divest, both ethical and economic. The fossil fuel reserves of these companies — a determinant of their market value — are five(!) times what scientists estimate can be burned while staying within $2^{\circ}\text{C}$ of warming. Investing in fossil fuels is therefore a way of betting on climate change. It’s especially absurd for universities to invest in them, when so much of the research on climate change has taken place at universities.

The other side of divestment is symbolic. It’s not likely that Congress will be able to pass a cap-and-trade or carbon-tax system anytime soon, especially when fossil fuel companies are among the biggest contributors to political campaigns. A series of university divestments would draw attention to the problem. It would send a message to the world: that fossil fuel companies should be shunned for basing their business model on climate change and then lying about its dangers. This reasoning echoes the apartheid divestment campaigns of the 1980s.

With support from McKibben’s organization 350.org, divestment took off last fall to become a real student movement, and today over 300 American universities have active student-led divestment campaigns. Four universities — Unity College, Hampshire College, Sterling College, and College of the Atlantic — have already divested. Divestment is spreading both to Canadian universities and to other non-profit organizations. We’ve been covered in the New York Times, endorsed by Al Gore, and, on the other hand, recently featured in a couple of rants by Fox News.

Divest Harvard

At Harvard, we began our fall semester with a small group quietly collecting student petition signatures, mostly by waiting outside the dining halls, but occasionally by going door-to-door in the dorms. It wasn’t really clear how many people supported us: we received a mix of enthusiasm, indifference, and occasional amusement from other students. But after enough time, we made it to 1,000 petition signatures. That was enough to get a referendum on the student government ballot. The ballot is primarily used to elect student government leaders, but it was our campaign that rediscovered the use of referenda as a tool of student activism.
(Following us, two other worthy campaigns — one on responsible investment more generally and one about sexual assault — also created their own referenda.)

After a week of postering and reaching out to student groups, our proposition — that Harvard should divest — won with 72% of the undergraduate student vote. That was a real turning point for us. On the one hand, having people vote on a referendum isn’t the same as engaging in the one-on-one conversations we had when convincing people to sign our petition. On the other hand, the 72% showed that we had a real majority in support. The statistic was quickly picked up by the media, since we were the first school to win a referendum on divestment (UNC has since had a winning referendum with 77% support).

That was when the campaign took off. People began to take us seriously. The Harvard administration, which had previously said it had no intention of considering divestment, promised us a serious, forty-five-minute meeting. We didn’t get what we had aimed for — a private meeting with President Drew Faust — but we had acquired legitimacy in the administration’s eyes. We were hopeful that we might be able to negotiate a compromise, and we ended our campaign last fall satisfied, plotting its trajectory at our final meeting.

The spring semester started with a flurry of additional activity and new challenges. On the one hand, we had to plan for the meeting with the administration — more precisely, with the Corporation Committee on Social Responsibility. (The CCSR is the subgroup of the Harvard Corporation that decides on issues such as divestment.) But we also knew that the fight couldn’t be won solely within the system. We had to build support on campus, from students and faculty, with rallies and speakers; we also had to reach out to alumni and let them know about our campaign. Fortunately, the publicity generated last semester had brought in a larger group of committed students, and we were able to split our organization into working groups to handle the greater responsibilities.

In February, we got our promised meeting with three members of the administration. While three representatives from our group met with the CCSR, we held a rally of about 40 people outside to show support.

In the meeting, the administration’s representatives reiterated their concern about climate change but questioned divestment as a tool. Unfortunately, since the meeting, they have continued to repeat their “presumption against divestment” (a phrase they have used with previous movements). This is the debate that we — and students across the nation — are going to have to win. Divestment alone isn’t going to slow the melting of the Arctic, but it’s a powerful tool for drawing attention to climate change and forcing action from our political system — as it did against apartheid in the 1980s. There isn’t much time left.

One of the most inspirational things I’ve heard this semester was at the Forward on Climate rally in Washington, D.C. last month, which most of our group attended. Addressing a crowd of 40,000 people, Bill McKibben said, “All I ever wanted to see was a movement of people to stop climate change, and now I’ve seen it.” To me, that’s one of the exciting and hopeful aspects of divestment — that it’s a movement of the people. It’s fundamentally an issue of social justice that we’re facing, and our group’s challenge is to convince Harvard to take it seriously enough to stand up against the fossil fuel industry.
In the meantime, our campaign has been trying to build support from student groups, alumni, and faculty. In a surprise turnaround, one of our members convinced alumnus Al Gore to declare his support for the divestment movement at a recent event on campus. We organized a teach-in the Tuesday before last featuring the writer and sociologist Juliet Schor. On April 11, we will hold a large rally outside Massachusetts Hall to close out the year and show support for divestment; we’ll be presenting our petition signatures to the administration.

Our most recent picture, taken for the National Day of Action, shows us with some supportive friends from the chess club.

Thanks to Joseph Lanzillo for proofreading a draft of this post.