Question (score 4):

I'm currently learning Calculus 2; more specifically, sequences and series. I'm not enjoying this section as much as I thought I would, because I'm having to learn all these different convergence tests without being shown any justification for why they work. I've been shown the proofs, but the proofs are not making it clear to my mind why they even work.

Limit Comparison Test: Suppose that we have two series $\displaystyle\sum a_n$ and $\displaystyle\sum b_n$ with $a_n\geq0$, $b_n>0$ $\forall n$. Define
$$ c = \displaystyle\lim_{n\to \infty} \frac{a_n}{b_n}. $$
If $c$ is positive (i.e. $c>0$) and finite (i.e. $c<\infty$), then either both series converge or both series diverge.

The first question I'd like to ask is: why does this even work? The second question I'd like to ask is: under what conditions does this work? Why do I ask this? Consider the two following series: $\displaystyle\sum_{n=1}^{\infty} \frac{1}{n^3}$ and $\displaystyle\sum_{n=1}^{\infty} \frac{1}{n^2}$. Both are p-series, and a p-series converges when $p>1$ and diverges when $p\leq1$. Therefore, both series above converge. Trying to verify this with the limit comparison test would go something like this:
$$ \displaystyle\lim_{n\to \infty} \frac{n^2}{n^3} = \displaystyle\lim_{n\to \infty} \frac{1}{n} = 0. $$
$c\not>0$, which seems to imply that both series don't converge. So, what is going on?

One small final question: I'd really like to improve in this part of the course and be in a position where I don't have to remember all these annoying tests and can just derive certain things from logic. Would this be too much to hope for, considering I'm only doing a Calculus course and not something like Real Analysis?

Comments (4):
• read the proof to understand why it works – Jan 9 '17 at 1:05
• That's not how the limit comparison test works. It says that if $\sum a_n$ converges, then $\sum b_n$ converges. If $\sum a_n$ diverges, then $\sum b_n$ diverges. And only if it holds for that limit. It doesn't, so the limit comparison test doesn't apply there. – Jan 9 '17 at 1:07
• By the way, here is the proof – Jan 9 '17 at 1:31
• @SimpleArt Thanks, but I've seen that proof. Unfortunately, I'm still not able to conclude why it works even after looking at the proof :( – user405274 Jan 9 '17 at 1:33

Answer (score 3):

The limit comparison test is very powerful. One of my favorite applications is this: Determine the convergence/divergence of
$$\sum_{n=1}^\infty \frac 1{n^{1+1/n}}.$$
The heuristic is very simple. For large $n$, we're saying that basically $a_n = cb_n$ (for some positive number $c$). Ignoring small values of $n$, then $\sum a_n = c\sum b_n$, so obviously both series converge together or diverge together. The rigorous proof is only slightly more intricate, sandwiching $a_n$ between $c'b_n$ and $c''b_n$ for $0<c'<c<c''$.

Your logic in your second question is flawed. We're not saying things are "if and only if." We're saying that provided $\lim a_n/b_n = c$ and $0<c<\infty$, we can make an inference. We (ostensibly) know nothing if $c=0$ or $c=\infty$.

You should try to develop intuition based on this sort of comparison. Have a short list of series you know are convergent and divergent. Then try to say to yourself, "When $n$ is large, what do these terms look like?" (I.e., what is a convenient $b_n$, and what is the $c$?) Try this out with the one I gave at the outset.
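For the exercise at the end of the answer, a quick numeric check (an illustration, not a proof) shows why $b_n = 1/n$ is the natural comparison: with $a_n = 1/n^{1+1/n}$, the ratio $a_n/b_n = n^{-1/n}$ tends to $1$, so the limit comparison test says the series diverges together with the harmonic series.

```python
# Numeric illustration (not a proof): with a_n = 1/n**(1+1/n) and
# b_n = 1/n, the ratio a_n/b_n equals n**(-1/n), which tends to 1.
def ratio(n):
    a = 1 / n ** (1 + 1 / n)
    b = 1 / n
    return a / b

print(ratio(10))      # ~0.794
print(ratio(1000))    # ~0.993
print(ratio(10**6))   # ~0.99999
```

Since the limit is a finite positive number, $\sum 1/n^{1+1/n}$ diverges, just like $\sum 1/n$.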
Exchange Database Hard Repair Process Using ESEUTIL /P Command

Summary: The Hard Repair or Hard Recovery command (EseUtil /p) is used for repairing a severely damaged or corrupt Exchange database that can't be fixed using the Soft Recovery (EseUtil /r) command. In this article, you will learn how to recover a database by using the EseUtil /p command. To avoid the data loss that hard recovery can cause, you can use an Exchange recovery software, such as Stellar Repair for Exchange.

Microsoft Exchange Server utilizes the Write-Ahead Logging (WAL) technique to maintain database integrity, reduce disk I/O, and avoid performance issues. Any changes made to the database are first stored as records in the append-only log files and then committed to the in-memory copy of the mailbox database. This way, Exchange Server ensures database consistency. However, if the logs are not committed to the database, the database becomes inconsistent, enters the Dirty Shutdown state, and dismounts from the server. Such a situation may occur due to one or more of the following reasons:

• Sudden power cut
• Hardware failure
• Software failure, such as a failed update
• Third-party software that is not application-compatible
• Incorrect antivirus configuration
• Human error
• Malware and viruses
• Lack of storage space

[Image: mailbox mount status]

You can replay the changes stored in the uncommitted logs on the database copy using the EseUtil Soft Recovery command (EseUtil /r) to recover the database and mount it back on the server to restore mailbox connectivity. If Soft Recovery fails, or you can't find the logs required to recover the database, you have to rely on the EseUtil Hard Recovery or Hard Repair process.

In this guide, you will learn when and how to safely use the Hard Repair or Hard Recovery command, EseUtil /p, to bring an inaccessible, inconsistent, or corrupt database from the Dirty Shutdown (dismounted) to the Clean Shutdown (mounted) state.
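The recovery flow described above reduces to the following command sequence (an illustrative sketch, not a copy-paste script: the folder path and the E02 log prefix are placeholders, and EX01DB02.edb is the example database used later in this article):

```
cd /d D:\Databases\DB02        :: folder holding the EDB file (example path)
eseutil /mh EX01DB02.edb       :: inspect the State field (Clean/Dirty Shutdown)
eseutil /r E02                 :: try Soft Recovery first, if the logs exist
eseutil /p EX01DB02.edb        :: Hard Repair -- last resort, may discard data
```

Each of these steps is covered in detail below.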
Steps to Perform Exchange Database Hard Repair Process using EseUtil

Follow the steps discussed below to perform the hard repair process using EseUtil and recover a corrupt or inaccessible Exchange database.

Step 1: Verify the Database Status

On the Exchange Server, open Command Prompt or the Exchange Management Shell (EMS) as administrator and navigate to the location of the affected EDB file using the CD command. Then execute the following command to check the database status:

EseUtil /mh <database name>

For instance:

EseUtil /mh "EX01DB02.edb"

[Image: check the mailbox database state]

Check the State field. If it displays Dirty Shutdown, the database needs repair and recovery. If the logs are available, try Soft Recovery. However, if Soft Recovery fails or the logs are missing or deleted, skip to the next step.

Step 2: Backup Database Files

Before running EseUtil to recover an inconsistent or corrupt database, you must back up the database folder and logs folder to a safe location. This will help prevent the permanent loss of mail items or mailboxes that may occur during the hard repair process. To create a backup of a failed, corrupt, or inconsistent Exchange database, go to the folder location and copy it to an external or internal storage volume.

Step 3: Run Hard Repair Command

Run the following command to execute the hard repair process on the affected database. Make sure the drive where the database is stored has free space of at least 1.2 times the database size:

EseUtil /p "EX01DB02.edb"

[Image: run hard recovery on the database]

You will see a warning message stating that this operation may cause information to be lost. If you accept the risk of data loss, click OK to start the hard repair process.

[Image: check the database integrity]

This may take a while to complete. During the repair process, EseUtil may remove irrecoverable mailboxes and mail items, including any changes that were made but not committed to the database.
Thus, there's a huge risk of losing important mail items.

Warning: Do not close or stop the hard repair process once it has started, as doing so can permanently damage the database.

Step 4: Move Mailboxes to a New Database

Once a database is recovered or repaired using the EseUtil Hard Repair command, it is marked as hard-coded. Besides, it's not safe to keep using a repaired database. Thus, you must move all mailboxes from the recovered Exchange mailbox database to another database on the same or another server.

Final Thoughts

While the hard repair process using the EseUtil /p command can restore an inaccessible, inconsistent, or corrupt database, it may remove irrecoverable information and changes to mailboxes due to uncommitted logs. To avoid data loss and the hassle of running EseUtil and IsInteg, you can use advanced Exchange database recovery software, such as Stellar Repair for Exchange. Unlike EseUtil hard repair or hard recovery, the software is GUI-based and does not alter the original database file. It runs in read-only mode to repair the database structure and extracts all mailboxes from the corrupt database with complete integrity. After the recovery, you can save the recovered mailboxes as individual PST files that you can easily import into Exchange Server or Outlook. You may also export the mailboxes recovered from damaged database files directly to a live Exchange database or Office 365 tenant in a few clicks.

The software can make a big difference when it comes to the downtime that your organization may have to experience if you choose the Hard Repair process. With the help of the software, you can not only avoid data loss but also reduce downtime by up to 75%.

FAQ

Is it safe to run EseUtil /p on the database?

The EseUtil /p command is safe if you create a backup of the database files before running it. Also, you should never stop the Hard Recovery process until it completes, as interrupting it can damage the database file beyond recovery.
How much time does the EseUtil /p process take?

The EseUtil utility runs at approximately 3 to 6 GB per hour, and defragmentation processes about 9 GB per hour. The exact time depends on your hardware and production environment.

What if EseUtil /p crashes while repairing a damaged Exchange database?

Make sure you have enough space (at least 110 percent, or 1.2x, of the current database you're repairing) on the drive where you are performing the repair. It is also recommended to temporarily disable virus scanning while you perform the repair.
Running Synchronized Code Inside an Async Task

Discussion in 'Plugin Development' started by kmecpp, Feb 10, 2015.

Thread Status: Not open for further replies.

1. Offline kmecpp

Hopefully some people still use these forums! I need to spawn in an entity from an asynchronous task, and I managed to achieve this by using a nested sync task inside of my async one, but I was curious as to how to do this without Bukkit schedulers, just pure Java. Does anyone have any ideas on how I could do that?

2. Trust me, plenty of people still use these forums. ;) What is your code so far? Could I see it?

3. Offline xTrollxDudex

There is no thread-safe way to do it other than that, unless you want to actually go deeper and see what is going on in the background. If you have thought this over, and you REALLY want to do it, there is a Queue in net.minecraft.server.MinecraftServer called processQueue; the instance of MinecraftServer can be obtained using CraftServer. Add your runnable to that queue and it will be run on the primary thread. Otherwise, do not try doing this with pure Java - entities and worlds are not thread safe. Using sync tasks in an async task is perfectly fine, although I fail to see the reason behind doing such a thing asynchronously.

Last edited by a moderator: Feb 10, 2015
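The processQueue approach xTrollxDudex describes can be sketched in plain Java: a concurrent queue that worker threads push Runnables onto, drained by the main loop. This is a simplified, self-contained illustration of the pattern, not actual CraftBukkit code - the real server drains its queue once per tick, and the class/field names here are made up for the example.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SyncHandoff {
    // Thread-safe queue drained by the "main" thread, analogous to
    // MinecraftServer.processQueue mentioned in the post above.
    static final Queue<Runnable> processQueue = new ConcurrentLinkedQueue<>();
    static String result = null;

    public static String demo() {
        try {
            // "Async" worker: do heavy work off-thread, then hand the
            // world-touching part (e.g. spawning an entity) back via the queue.
            Thread worker = new Thread(() ->
                    processQueue.add(() -> result = "spawned on main thread"));
            worker.start();
            worker.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }

        // Main-thread "tick": drain and run whatever the workers queued.
        Runnable task;
        while ((task = processQueue.poll()) != null)
            task.run();
        return result;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The point of the pattern is that only the queue is shared between threads; everything the queued Runnable touches still runs on the main thread, which is why it is safe for non-thread-safe objects like entities and worlds.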
Stack Overflow is a question and answer site for professional and enthusiast programmers.

Is it possible to load a XAML file from disk (i.e. not from an application resource) and create the object tree without creating the outer object?

In other words, I want to create a class that derives from Window and loads a XAML file from disk. It seems I can either create a class that does not derive from Window and can load from disk, or I can create a class that derives from Window but loads the XAML from an application resource.

For example, I can do this:

XmlTextReader xmlReader = new XmlTextReader("c:\\mywindow.xaml");
object obj = XamlReader.Load(xmlReader);
Window win = obj as Window;

but what I really want to do is this:

class MyWindow : Window
{
    public MyWindow()
    {
        System.Uri resourceLocater = new System.Uri("file://c:/mywindow.xaml", UriKind.Absolute);
        System.Windows.Application.LoadComponent(this, resourceLocater);
    }
}
...
MyWindow w = new MyWindow();

Currently the second bit of code gives an exception saying that the URI cannot be absolute.

Comment: What a great idea, libraries of windows or components just waiting to be used - brilliant. – MrTelly Apr 10 '09 at 5:11

2 Answers

Answer (accepted, score 1):

You can load the content of a XAML file into a string and then parse the content, like this:

try
{
    string strXaml = String.Empty;
    using (var reader = new System.IO.StreamReader(filePath, true))
    {
        strXaml = reader.ReadToEnd();
    }
    object xamlContent = System.Windows.Markup.XamlReader.Parse(strXaml);
}
catch (System.Windows.Markup.XamlParseException ex)
{
    // You can get specific error information like LineNumber from the exception
}
catch (Exception ex)
{
    // Some other error
}

Then you should be able to set the xamlContent to the Content property of the Window.
Window w = new Window();
w.Content = xamlContent;
w.ShowDialog();

Answer:

I'm not sure you can load an assembly with an absolute path pointing to a file somewhere on the file system. I had a similar problem a few days ago; maybe my post can be of help (look at the edit of my answer): http://stackoverflow.com/questions/709087/load-a-resourcedictionary-from-an-assembly

edit: I just saw you want to load a XAML file, not an assembly? Then check up on System.Windows.Markup.XamlReader; maybe this is what you are looking for.
I was looking at the source for Drupal 7, and I found some things I hadn't seen before. I did some initial looking in the PHP manual, but it didn't explain these examples.

What does the keyword static do to a variable inside a function?

function module_load_all($bootstrap = FALSE) {
  static $has_run = FALSE

6 Answers

Answer (accepted, score 25):

It makes the function remember the value of the given variable ($has_run in your example) between multiple calls. You could use this for different purposes, for example:

function doStuff() {
  static $cache = null;

  if ($cache === null) {
    $cache = '%heavy database stuff or something%';
  }

  // code using $cache
}

In this example, the if would only be executed once, even if multiple calls to doStuff occur.

Comments:
• Also, if the function has run once, it will not reset the value of $cache to null on later calls, right? – user151841 Jul 6 '11 at 14:18
• @user151841 $cache will only be reset between requests. So yes, it will not be reset on later calls in the same request (or execution of the script). – Yoshi Jul 6 '11 at 14:23
• @Yoshi, Can you give me answer of stackoverflow.com/questions/17022047/… question? – Jimit Jun 10 '13 at 11:10
• why the variable $cache is not reinitialized to null in second call??? – Muhammad Mar 13 '14 at 8:09
• @Muhammad because that's just what the keyword static does. – Yoshi Mar 13 '14 at 8:24

Answer:

Given the following example:

function a($s) {
  static $v = 10;
  echo $v;
  $v = $s;
}

The first call of a(20); will output 10, then set $v to be 20. The variable $v is not garbage collected after the function ends, as it is a static (non-dynamic) variable. The variable will stay within its scope until the script totally ends.
Therefore, the following call of a(15); will then output 20, and then set $v to be 15.

Comment:
• "The variable $v is not garbage collected after the function ends" - this helps me understand the behavior this keyword produces :) – user151841 May 31 '11 at 14:29

Answer:

Static works the same way as it does in a class. The variable is shared across all invocations of the function. In your particular example, once the function is run, $has_run is set to TRUE. All future runs of the function will have $has_run = TRUE. This is particularly useful in recursive functions (as an alternative to passing the count).

A static variable exists only in a local function scope, but it does not lose its value when program execution leaves this scope. See http://php.net/manual/en/language.variables.scope.php

Answer:

A static variable in a function means that no matter how many times you call the function, there's only one variable.

<?php
class Foo {
    protected static $test = 'Foo';

    function yourstatic() {
        static $test = 0;
        $test++;
        echo $test . "\n";
    }

    function bar() {
        $test = 0;
        $test++;
        echo $test . "\n";
    }
}

$f = new Foo();
$f->yourstatic(); // 1
$f->yourstatic(); // 2
$f->yourstatic(); // 3
$f->bar(); // 1
$f->bar(); // 1
$f->bar(); // 1
?>

Answer:

Inside a function, static means that the variable will retain its value each time the function is called during the life of the page load. Therefore, in the example you've given, if you call a function twice and it set $has_run to true, then the function would be able to know that it had previously been called, because $has_run would still be equal to true when the function starts the second time.

The usage of the static keyword in this context is explained in the PHP manual here: http://php.net/manual/en/language.variables.scope.php
Consider this: class Foo { public function call() { static $test = 0; $test++; echo $test . PHP_EOL; } } $a = new Foo(); $a->call(); // 1 $a->call(); // 2 $a->call(); // 3 $b = new Foo(); $b->call(); // 4 $b->call(); // 5 If you want a static variable to remember its state only for current class instance, you'd better stick to a class property, like this: class Bar { private $test = 0; public function call() { $this->test++; echo $this->test . PHP_EOL; } } $a = new Bar(); $a->call(); // 1 $a->call(); // 2 $a->call(); // 3 $b = new Bar(); $b->call(); // 1 $b->call(); // 2 share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
lkml.org  [lkml]   [2007]   [May]   [1]   [last100]   RSS Feed Views: [wrap][no wrap]   [headers]  [forward]    Messages in this thread / Date From SubjectRe: [RFC, PATCH 3/4] SoC base drivers: ASIC3 driver On Tue, 1 May 2007 08:09:48 +0300 Paul Sokolovsky <[email protected]> wrote: > Hello linux-kernel, > > Note: This driver depends on ds1wm.h header, recently submitted, and which by now should be in -mm tree. > ----- > > asic3_base: SoC base driver for ASIC3 chip. > > Signed-off-by: Paul Sokolovsky <[email protected]> > > ... > > + > +struct asic3_data > +{ struct asic3_data { > + void *mapping; > + unsigned int bus_shift; > + int irq_base; > + int irq_nr; > + > + u16 irq_bothedge[4]; > + struct device *dev; > + > + struct platform_device *mmc_dev; > +}; > + > +static spinlock_t asic3_gpio_lock; DEFINE_SPINLOCK(), please - it's better to do it at compile-time. > +static int asic3_remove(struct platform_device *dev); > + > +static inline unsigned long asic3_address(struct device *dev, > + unsigned int reg) > +{ > + struct asic3_data *adata; > + > + adata = (struct asic3_data *)dev->driver_data; > + > + return (unsigned long)adata->mapping + (reg >> (2 - adata->bus_shift)); > +} > + > +void asic3_write_register(struct device *dev, unsigned int reg, u32 value) > +{ > + __raw_writew(value, asic3_address(dev, reg)); > +} > +EXPORT_SYMBOL(asic3_write_register); > + > +u32 asic3_read_register(struct device *dev, unsigned int reg) > +{ > + return __raw_readw(asic3_address(dev, reg)); > +} > +EXPORT_SYMBOL(asic3_read_register); > + > +static inline void __asic3_write_register(struct asic3_data *asic, > + unsigned int reg, u32 value) > +{ > + __raw_writew(value, (unsigned long)asic->mapping > + + (reg >> (2 - asic->bus_shift))); > +} > + > +static inline u32 __asic3_read_register(struct asic3_data *asic, > + unsigned int reg) > +{ > + return __raw_readw((unsigned long)asic->mapping > + + (reg >> (2 - asic->bus_shift))); > +} Why __raw_*() here? 
How come we're using the io.h functions here, but [patch 2/4] open-coded it? > +#define ASIC3_GPIO_FN(get_fn_name, set_fn_name, REG) \ > +u32 get_fn_name(struct device *dev) \ > +{ \ > + return asic3_read_register(dev, REG); \ > +} \ > +EXPORT_SYMBOL(get_fn_name); \ > + \ > +void set_fn_name(struct device *dev, u32 bits, u32 val) \ > +{ \ > + unsigned long flags; \ > + \ > + spin_lock_irqsave(&asic3_gpio_lock, flags); \ > + val |= (asic3_read_register(dev, REG) & ~bits); \ > + asic3_write_register(dev, REG, val); \ > + spin_unlock_irqrestore(&asic3_gpio_lock, flags); \ > +} \ > +EXPORT_SYMBOL(set_fn_name); > + > +#define ASIC3_GPIO_REGISTER(ACTION, action, fn, FN) \ > + ASIC3_GPIO_FN (asic3_get_gpio_ ## action ## _ ## fn , \ > + asic3_set_gpio_ ## action ## _ ## fn , \ > + _IPAQ_ASIC3_GPIO_ ## FN ## _Base \ > + + _IPAQ_ASIC3_GPIO_ ## ACTION ) > + > +#define ASIC3_GPIO_FUNCTIONS(fn, FN) \ > + ASIC3_GPIO_REGISTER (Direction, dir, fn, FN) \ > + ASIC3_GPIO_REGISTER (Out, out, fn, FN) \ > + ASIC3_GPIO_REGISTER (SleepMask, sleepmask, fn, FN) \ > + ASIC3_GPIO_REGISTER (SleepOut, sleepout, fn, FN) \ > + ASIC3_GPIO_REGISTER (BattFaultOut, battfaultout, fn, FN) \ > + ASIC3_GPIO_REGISTER (AltFunction, alt_fn, fn, FN) \ > + ASIC3_GPIO_REGISTER (SleepConf, sleepconf, fn, FN) \ > + ASIC3_GPIO_REGISTER (Status, status, fn, FN) > + > +ASIC3_GPIO_FUNCTIONS(a, A) > +ASIC3_GPIO_FUNCTIONS(b, B) > +ASIC3_GPIO_FUNCTIONS(c, C) > +ASIC3_GPIO_FUNCTIONS(d, D) Ho hum, fair enough. Was it deliberate that get_fn_name() and set_fn_name() are given global scope? I guess so, given that they're exported to modules. Please remove the space between the function or macro name and the "(" (whole patchset). 
> +int asic3_gpio_get_value(struct device *dev, unsigned gpio) > +{ > + u32 mask = ASIC3_GPIO_bit(gpio); > + printk("%s(%d)\n", __FUNCTION__, gpio); > + switch (gpio >> 4) { > + case _IPAQ_ASIC3_GPIO_BANK_A: > + return asic3_get_gpio_status_a(dev) & mask; > + case _IPAQ_ASIC3_GPIO_BANK_B: > + return asic3_get_gpio_status_b(dev) & mask; > + case _IPAQ_ASIC3_GPIO_BANK_C: > + return asic3_get_gpio_status_c(dev) & mask; > + case _IPAQ_ASIC3_GPIO_BANK_D: > + return asic3_get_gpio_status_d(dev) & mask; > + } > + > + printk(KERN_ERR "%s: invalid GPIO value 0x%x", __FUNCTION__, gpio); > + return 0; > +} > +EXPORT_SYMBOL(asic3_gpio_get_value); > + > +void asic3_gpio_set_value(struct device *dev, unsigned gpio, int val) > +{ > + u32 mask = ASIC3_GPIO_bit(gpio); > + u32 bitval = 0; > + if (val) bitval = mask; > + printk("%s(%d, %d)\n", __FUNCTION__, gpio, val); > + > + switch (gpio >> 4) { > + case _IPAQ_ASIC3_GPIO_BANK_A: > + asic3_set_gpio_out_a(dev, mask, bitval); > + return; > + case _IPAQ_ASIC3_GPIO_BANK_B: > + asic3_set_gpio_out_b(dev, mask, bitval); > + return; > + case _IPAQ_ASIC3_GPIO_BANK_C: > + asic3_set_gpio_out_c(dev, mask, bitval); > + return; > + case _IPAQ_ASIC3_GPIO_BANK_D: > + asic3_set_gpio_out_d(dev, mask, bitval); > + return; > + } > + > + printk(KERN_ERR "%s: invalid GPIO value 0x%x", __FUNCTION__, gpio); > +} > +EXPORT_SYMBOL(asic3_gpio_set_value); I assume all these debugging printks won't last long. > +int asic3_irq_base(struct device *dev) > +{ > + struct asic3_data *asic = dev->driver_data; > + > + return asic->irq_base; > +} > +EXPORT_SYMBOL(asic3_irq_base); > + > +void asic3_set_led(struct device *dev, int led_num, int duty_time, > + int cycle_time, int timebase) > +{ > + struct asic3_data *asic = dev->driver_data; > + unsigned int led_base; > + > + /* it's a macro thing: see #define _IPAQ_ASIC_LED_0_Base for why you > + * can't substitute led_num in the macros below... 
> + */ > + > + switch (led_num) { > + case 0: > + led_base = _IPAQ_ASIC3_LED_0_Base; > + break; > + case 1: > + led_base = _IPAQ_ASIC3_LED_1_Base; > + break; > + case 2: > + led_base = _IPAQ_ASIC3_LED_2_Base; > + break; > + default: > + printk(KERN_ERR "%s: invalid led number %d", __FUNCTION__, > + led_num); > + return; > + } > + > + __asic3_write_register(asic, led_base + _IPAQ_ASIC3_LED_TimeBase, > + timebase | LED_EN); > + __asic3_write_register(asic, led_base + _IPAQ_ASIC3_LED_PeriodTime, > + cycle_time); > + __asic3_write_register(asic, led_base + _IPAQ_ASIC3_LED_DutyTime, > + 0); > + udelay(20); /* asic voodoo - possibly need a whole duty cycle? */ > + __asic3_write_register(asic, led_base + _IPAQ_ASIC3_LED_DutyTime, > + duty_time); > +} > + > +EXPORT_SYMBOL(asic3_set_led); Remove the line before EXPORT_SYMBOL(). > +void asic3_set_clock_sel(struct device *dev, u32 bits, u32 val) > +{ > + struct asic3_data *asic = dev->driver_data; > + unsigned long flags; > + u32 v; > + > + spin_lock_irqsave(&asic3_gpio_lock, flags); > + v = __asic3_read_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, SEL)); > + v = (v & ~bits) | val; > + __asic3_write_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, SEL), v); > + spin_unlock_irqrestore(&asic3_gpio_lock, flags); > +} > +EXPORT_SYMBOL(asic3_set_clock_sel); > + > +void asic3_set_clock_cdex(struct device *dev, u32 bits, u32 val) > +{ > + struct asic3_data *asic = dev->driver_data; > + unsigned long flags; > + u32 v; > + > + spin_lock_irqsave(&asic3_gpio_lock, flags); > + v = __asic3_read_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, CDEX)); > + v = (v & ~bits) | val; > + __asic3_write_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, CDEX), v); > + spin_unlock_irqrestore(&asic3_gpio_lock, flags); > +} > +EXPORT_SYMBOL(asic3_set_clock_cdex); > + > +static int asic3_clock_cdex_enable(struct clk *clk, int enable) > +{ > + struct asic3_data *asic = (struct asic3_data *)clk->parent->ctrlbit; > + unsigned long flags, val; > + > + local_irq_save(flags); > + > + val = 
__asic3_read_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, CDEX)); > + if (enable) > + val |= clk->ctrlbit; > + else > + val &= ~clk->ctrlbit; > + __asic3_write_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, CDEX), val); > + > + local_irq_restore(flags); > + > + return 0; > +} How come asic3_clock_cdex_enable() uses local_irq_save() but the similar-looking functions above use spin_lock_irqsave()? > + > +#define MAX_ASIC_ISR_LOOPS 20 > +#define _IPAQ_ASIC3_GPIO_Base_INCR \ > + (_IPAQ_ASIC3_GPIO_B_Base - _IPAQ_ASIC3_GPIO_A_Base) > + > +static inline void asic3_irq_flip_edge(struct asic3_data *asic, > + u32 base, int bit) > +{ > + u16 edge = __asic3_read_register(asic, > + base + _IPAQ_ASIC3_GPIO_EdgeTrigger); > + edge ^= bit; > + __asic3_write_register(asic, > + base + _IPAQ_ASIC3_GPIO_EdgeTrigger, edge); > +} This function doesn't need the spinlock? > +static void asic3_irq_demux(unsigned int irq, struct irq_desc *desc) > +{ > + int iter; > + struct asic3_data *asic; > + > + /* Acknowledge the parrent (i.e. CPU's) IRQ */ > + desc->chip->ack(irq); > + > + asic = desc->handler_data; > + > + /* printk( KERN_NOTICE "asic3_irq_demux: irq=%d\n", irq ); */ > + for (iter = 0 ; iter < MAX_ASIC_ISR_LOOPS; iter++) { > + u32 status; > + int bank; > + > + status = __asic3_read_register(asic, > + IPAQ_ASIC3_OFFSET(INTR, PIntStat)); > + /* Check all ten register bits */ > + if ((status & 0x3ff) == 0) > + break; > + > + /* Handle GPIO IRQs */ > + for (bank = 0; bank < 4; bank++) { > + if (status & (1 << bank)) { > + unsigned long base, i, istat; > + > + base = _IPAQ_ASIC3_GPIO_A_Base > + + bank * _IPAQ_ASIC3_GPIO_Base_INCR; > + istat = __asic3_read_register(asic, > + base + _IPAQ_ASIC3_GPIO_IntStatus); > + /* IntStatus is write 0 to clear */ > + /* XXX could miss interrupts! */ > + __asic3_write_register(asic, > + base + _IPAQ_ASIC3_GPIO_IntStatus, 0); And neither does this? > + for (i = 0; i < 16; i++) { I hope the magical 16 is meaningful to those who are familiar with the hardware. 
> + int bit = (1 << i); > + unsigned int irqnr; > + if (!(istat & bit)) > + continue; > + > + irqnr = asic->irq_base > + + (16 * bank) + i; > + desc = irq_desc + irqnr; > + desc->handle_irq(irqnr, desc); > + if (asic->irq_bothedge[bank] & bit) { > + asic3_irq_flip_edge(asic, base, > + bit); > + } > + } > + } > + } > + > + /* Handle remaining IRQs in the status register */ > + { > + int i; > + > + for (i = ASIC3_LED0_IRQ; i <= ASIC3_OWM_IRQ; i++) { > + /* They start at bit 4 and go up */ > + if (status & (1 << (i - ASIC3_LED0_IRQ + 4))) { > + desc = irq_desc + asic->irq_base + i; > + desc->handle_irq(asic->irq_base + i, > + desc); > + } > + } > + } > + > + } > + > + if (iter >= MAX_ASIC_ISR_LOOPS) > + printk(KERN_ERR "%s: interrupt processing overrun\n", > + __FUNCTION__); > +} > + > +static inline int asic3_irq_to_bank(struct asic3_data *asic, int irq) > +{ > + int n; > + > + n = (irq - asic->irq_base) >> 4; > + > + return (n * (_IPAQ_ASIC3_GPIO_B_Base - _IPAQ_ASIC3_GPIO_A_Base)); > +} > + > +static inline int asic3_irq_to_index(struct asic3_data *asic, int irq) > +{ > + return (irq - asic->irq_base) & 15; > +} > + > +static void asic3_mask_gpio_irq(unsigned int irq) > +{ > + struct asic3_data *asic = get_irq_chip_data(irq); > + u32 val, bank, index; > + unsigned long flags; > + > + bank = asic3_irq_to_bank(asic, irq); > + index = asic3_irq_to_index(asic, irq); > + > + spin_lock_irqsave(&asic3_gpio_lock, flags); > + val = __asic3_read_register(asic, bank + _IPAQ_ASIC3_GPIO_Mask); > + val |= 1 << index; > + __asic3_write_register(asic, bank + _IPAQ_ASIC3_GPIO_Mask, val); > + spin_unlock_irqrestore(&asic3_gpio_lock, flags); > +} Locked. 
> +static void asic3_mask_irq(unsigned int irq) > +{ > + struct asic3_data *asic = get_irq_chip_data(irq); > + int regval; > + > + if (irq < ASIC3_NR_GPIO_IRQS) { > + printk(KERN_ERR "asic3_base: gpio mask attempt, irq %d\n", > + irq); > + return; > + } > + > + regval = __asic3_read_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask); > + > + switch (irq - asic->irq_base) { > + case ASIC3_LED0_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK0); > + break; > + case ASIC3_LED1_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK1); > + break; > + case ASIC3_LED2_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK2); > + break; > + case ASIC3_SPI_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK3); > + break; > + case ASIC3_SMBUS_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK4); > + break; > + case ASIC3_OWM_IRQ: > + __asic3_write_register(asic, > + _IPAQ_ASIC3_INTR_Base + _IPAQ_ASIC3_INTR_IntMask, > + regval & ~ASIC3_INTMASK_MASK5); > + break; > + default: > + printk(KERN_ERR "asic3_base: bad non-gpio irq %d\n", irq); > + break; > + } > +} Not locked! Please add a comment to asic3_gpio_lock identifying what resource(s) it protects. > +static void asic3_unmask_gpio_irq(unsigned int irq) sticky space bar. 
> +{
> +	struct asic3_data *asic = get_irq_chip_data(irq);
> +	u32 val, bank, index;
> +	unsigned long flags;
> +
> +	bank = asic3_irq_to_bank(asic, irq);
> +	index = asic3_irq_to_index(asic, irq);
> +
> +	spin_lock_irqsave(&asic3_gpio_lock, flags);
> +	val = __asic3_read_register(asic, bank + _IPAQ_ASIC3_GPIO_Mask);
> +	val &= ~(1 << index);
> +	__asic3_write_register(asic, bank + _IPAQ_ASIC3_GPIO_Mask, val);
> +	spin_unlock_irqrestore(&asic3_gpio_lock, flags);
> +}
>
> ...
>
> +static int asic3_gpio_irq_type(unsigned int irq, unsigned int type)
> +{
> +	struct asic3_data *asic = get_irq_chip_data(irq);
> +	u32 bank, index;
> +	unsigned long flags;
> +	u16 trigger, level, edge, bit;
> +
> +	bank = asic3_irq_to_bank(asic, irq);
> +	index = asic3_irq_to_index(asic, irq);
> +	bit = 1<<index;
> +
> +	spin_lock_irqsave(&asic3_gpio_lock, flags);
> +	level = __asic3_read_register(asic,
> +		bank + _IPAQ_ASIC3_GPIO_LevelTrigger);
> +	edge = __asic3_read_register(asic,
> +		bank + _IPAQ_ASIC3_GPIO_EdgeTrigger);
> +	trigger = __asic3_read_register(asic,
> +		bank + _IPAQ_ASIC3_GPIO_TriggerType);
> +	asic->irq_bothedge[(irq - asic->irq_base) >> 4] &= ~bit;
> +
> +	if (type == IRQT_RISING) {
> +		trigger |= bit;
> +		edge |= bit;
> +	} else if (type == IRQT_FALLING) {
> +		trigger |= bit;
> +		edge &= ~bit;
> +	} else if (type == IRQT_BOTHEDGE) {
> +		trigger |= bit;
> +		if (asic3_gpio_get_value(asic->dev, irq - asic->irq_base))
> +			edge &= ~bit;
> +		else
> +			edge |= bit;
> +		asic->irq_bothedge[(irq - asic->irq_base) >> 4] |= bit;
> +	} else if (type == IRQT_LOW) {
> +		trigger &= ~bit;
> +		level &= ~bit;
> +	} else if (type == IRQT_HIGH) {
> +		trigger &= ~bit;
> +		level |= bit;
> +	} else {
> +		/*
> +		 * if type == IRQT_NOEDGE, we should mask interrupts, but
> +		 * be careful to not unmask them if mask was also called.
> +		 * Probably need internal state for mask.
> +		 */
> +		printk(KERN_NOTICE "asic3: irq type not changed.\n");
> +	}
> +	__asic3_write_register(asic, bank + _IPAQ_ASIC3_GPIO_LevelTrigger,
> +			       level);
> +	__asic3_write_register(asic, bank + _IPAQ_ASIC3_GPIO_EdgeTrigger,
> +			       edge);
> +	__asic3_write_register(asic, bank + _IPAQ_ASIC3_GPIO_TriggerType,
> +			       trigger);
> +	spin_unlock_irqrestore(&asic3_gpio_lock, flags);
> +	return 0;
> +}

Locking here looks good.

> +static struct irq_chip asic3_gpio_irq_chip = {
> +	.name		= "ASIC3-GPIO",
> +	.ack		= asic3_mask_gpio_irq,
> +	.mask		= asic3_mask_gpio_irq,
> +	.unmask		= asic3_unmask_gpio_irq,
> +	.set_type	= asic3_gpio_irq_type,
> +};
> +
> +static struct irq_chip asic3_irq_chip = {
> +	.name		= "ASIC3",
> +	.ack		= asic3_mask_irq,
> +	.mask		= asic3_mask_irq,
> +	.unmask		= asic3_unmask_irq,
> +};
> +
> +static void asic3_release(struct device *dev)
> +{
> +	struct platform_device *sdev = to_platform_device(dev);
> +
> +	kfree(sdev->resource);
> +	kfree(sdev);
> +}
> +
> +int asic3_register_mmc(struct device *dev)
> +{
> +	struct platform_device *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
> +	struct tmio_mmc_hwconfig *mmc_config = kmalloc(sizeof(*mmc_config),
> +						       GFP_KERNEL);
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct asic3_data *asic = dev->driver_data;
> +	struct asic3_platform_data *asic3_pdata = dev->platform_data;
> +	struct resource *res;
> +	int rc;
> +
> +	if (sdev == NULL || mmc_config == NULL)
> +		return -ENOMEM;

That'll leak *sdev if *mmc_config==NULL.
> +	if (asic3_pdata->tmio_mmc_hwconfig) {
> +		memcpy(mmc_config, asic3_pdata->tmio_mmc_hwconfig,
> +		       sizeof(*mmc_config));
> +	} else {
> +		memset(mmc_config, 0, sizeof(*mmc_config));
> +	}
> +	mmc_config->address_shift = asic->bus_shift;
> +
> +	sdev->id = -1;
> +	sdev->name = "asic3_mmc";
> +	sdev->dev.parent = dev;
> +	sdev->num_resources = 2;
> +	sdev->dev.platform_data = mmc_config;
> +	sdev->dev.release = asic3_release;
> +
> +	res = kzalloc(sdev->num_resources * sizeof(struct resource),
> +		      GFP_KERNEL);
> +	if (res == NULL) {
> +		kfree(sdev);
> +		kfree(mmc_config);
> +		return -ENOMEM;
> +	}
> +	sdev->resource = res;
> +
> +	res[0].start = pdev->resource[2].start;
> +	res[0].end = pdev->resource[2].end;
> +	res[0].flags = IORESOURCE_MEM;
> +	res[1].start = res[1].end = pdev->resource[3].start;
> +	res[1].flags = IORESOURCE_IRQ;
> +
> +	rc = platform_device_register(sdev);
> +	if (rc) {
> +		printk(KERN_ERR "asic3_base: "
> +		       "Could not register asic3_mmc device\n");
> +		kfree(res);
> +		kfree(sdev);

kfree(mmc_config); ?

> +		return rc;
> +	}
> +
> +	asic->mmc_dev = sdev;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(asic3_register_mmc);
> +
> +int asic3_unregister_mmc(struct device *dev)
> +{
> +	struct asic3_data *asic = dev->driver_data;
> +	platform_device_unregister(asic->mmc_dev);
> +	asic->mmc_dev = 0;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(asic3_unregister_mmc);
> +
>
> ...
>
> +	for (i = 0 ; i < ASIC3_NR_IRQS ; i++) {

Use

	for (i = 0; i < ASIC3_NR_IRQS; i++) {

> +	for (i = 0 ; i < ASIC3_NR_IRQS ; i++) {

Ditto (check all patches)

(soon we'll have a script to do this)

(hopefully)

> +		int irq = i + asic->irq_base;
> +		set_irq_flags(irq, 0);
> +		set_irq_handler (irq, NULL);
> +		set_irq_chip (irq, NULL);
> +		set_irq_chip_data(irq, NULL);
> +	}
> +
> +	set_irq_chained_handler(asic->irq_nr, NULL);
> +	}
> +
> +	if (asic->mmc_dev)
> +		asic3_unregister_mmc(&pdev->dev);
> +
> +	for (i = 0; i < ARRAY_SIZE(asic3_clocks); i++)
> +		clk_unregister(&asic3_clocks[i]);
> +	clk_unregister(&clk_g);
> +
> +	__asic3_write_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, SEL), 0);
> +	__asic3_write_register(asic, IPAQ_ASIC3_OFFSET(INTR, IntMask), 0);
> +
> +	iounmap(asic->mapping);
> +
> +	kfree(asic);
> +
> +	return 0;
> +}
>
> ...
>
> +static int asic3_suspend(struct platform_device *pdev, pm_message_t state)
> +{
> +	struct asic3_data *asic = platform_get_drvdata(pdev);
> +	suspend_cdex = __asic3_read_register(asic,
> +		_IPAQ_ASIC3_CLOCK_Base + _IPAQ_ASIC3_CLOCK_CDEX);
> +	/* The LEDs are still active during suspend */
> +	__asic3_write_register(asic,
> +		_IPAQ_ASIC3_CLOCK_Base + _IPAQ_ASIC3_CLOCK_CDEX,
> +		suspend_cdex & ASIC3_SUSPEND_CDEX_MASK);
> +	return 0;
> +}
> +
> +static int asic3_resume(struct platform_device *pdev)
> +{
> +	struct asic3_data *asic = platform_get_drvdata(pdev);
> +	unsigned short intmask;
> +
> +	__asic3_write_register(asic, IPAQ_ASIC3_OFFSET(CLOCK, CDEX),
> +		suspend_cdex);
> +
> +	if (asic->irq_nr != -1) {
> +		/* Toggle the interrupt mask to try to get ASIC3 to show
> +		 * the CPU an interrupt edge. For more details see the
> +		 * kernel-discuss thread around 13 June 2005 with the
> +		 * subject "asic3 suspend / resume".
> +		 */
> +		intmask = __asic3_read_register(asic,
> +			IPAQ_ASIC3_OFFSET(INTR, IntMask));
> +		__asic3_write_register(asic, IPAQ_ASIC3_OFFSET(INTR, IntMask),
> +			intmask & ~ASIC3_INTMASK_GINTMASK);
> +		mdelay(1);
> +		__asic3_write_register(asic, IPAQ_ASIC3_OFFSET(INTR, IntMask),
> +			intmask | ASIC3_INTMASK_GINTMASK);
> +	}
> +
> +	return 0;
> +}
> +
> +static struct platform_driver asic3_device_driver = {
> +	.driver		= {
> +		.name	= "asic3",
> +	},
> +	.probe		= asic3_probe,
> +	.remove		= asic3_remove,

Should .remove be __devexit_p()?

> +	.suspend	= asic3_suspend,
> +	.resume		= asic3_resume,
> +	.shutdown	= asic3_shutdown,
> +};

Does this driver have a Kconfig dependency upon CONFIG_PM?  If not, you should support CONFIG_PM=n.  The typical way of doing that is

#ifdef CONFIG_PM
static int asic3_suspend(struct platform_device *pdev, pm_message_t state)
{
	...
}

static int asic3_resume(struct platform_device *pdev)
{
	...
}
#else
#define asic3_suspend NULL
#define asic3_resume NULL
#endif

> +static int __init asic3_base_init(void)
> +{
> +	int retval = 0;
> +	retval = platform_driver_register(&asic3_device_driver);
> +	return retval;
> +}
> +
> +static void __exit asic3_base_exit(void)
> +{
> +	platform_driver_unregister(&asic3_device_driver);
> +}
> +
> +#ifdef MODULE
> +module_init(asic3_base_init);
> +#else /* start early for dependencies */
> +subsys_initcall(asic3_base_init);
> +#endif

hm, I'd expect that subsys_initcall() from within a module will do the right thing, in which case the ifdef isn't needed.  I certainly hope that's the case.
> +module_exit(asic3_base_exit);
>
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Phil Blundell <[email protected]>");
> +MODULE_DESCRIPTION("Core driver for HTC ASIC3");
> +MODULE_SUPPORTED_DEVICE("asic3");
> diff --git a/include/linux/soc/asic3_base.h b/include/linux/soc/asic3_base.h
> new file mode 100644
> index 0000000..f17acda
> --- /dev/null
> +++ b/include/linux/soc/asic3_base.h
> @@ -0,0 +1,100 @@
> +#include <asm/types.h>
> +
> +/* Private API - for ASIC3 devices internal use only */
> +#define HDR_IPAQ_ASIC3_ACTION(ACTION,action,fn,FN) \
> +u32 asic3_get_gpio_ ## action ## _ ## fn (struct device *dev); \
> +void asic3_set_gpio_ ## action ## _ ## fn (struct device *dev, u32 bits, u32 val);
> +
> +#define HDR_IPAQ_ASIC3_FN(fn,FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( MASK,mask,fn,FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( DIR, dir, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( OUT, out, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( LEVELTRI, trigtype, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( RISING, rising, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( LEVEL, triglevel, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( SLEEP_MASK, sleepmask, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( SLEEP_OUT, sleepout, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( BATT_FAULT_OUT, battfaultout, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( INT_STATUS, intstatus, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( ALT_FUNCTION, alt_fn, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( SLEEP_CONF, sleepconf, fn, FN) \
> +	HDR_IPAQ_ASIC3_ACTION ( STATUS, status, fn, FN)

s/ (/(/g

> +struct tmio_mmc_hwconfig;
> +
> +struct asic3_platform_data
> +{

struct asic3_platform_data {

(review whole patchset)

> +	struct {
> +		u32 dir;
> +		u32 init;
> +		u32 sleep_mask;
> +		u32 sleep_out;
> +		u32 batt_fault_out;
> +		u32 sleep_conf;
> +		u32 alt_function;
> +	} gpio_a, gpio_b, gpio_c, gpio_d;
> +
> +	int irq_base;
> +	unsigned int bus_shift;
> +
> +	struct platform_device **child_platform_devs;
> +	int num_child_platform_devs;
> +
> +	struct tmio_mmc_hwconfig *tmio_mmc_hwconfig;
> +};
> diff --git a/include/linux/soc/tmio_mmc.h b/include/linux/soc/tmio_mmc.h
> new file mode 100644
> index 0000000..b8c407c
> --- /dev/null
> +++ b/include/linux/soc/tmio_mmc.h
> @@ -0,0 +1,17 @@
> +#include <linux/platform_device.h>
> +
> +#define MMC_CLOCK_DISABLED 0
> +#define MMC_CLOCK_ENABLED 1
> +
> +#define TMIO_WP_ALWAYS_RW ((void*)-1)
> +
> +struct tmio_mmc_hwconfig {
> +	void (*hwinit)(struct platform_device *sdev);
> +	void (*set_mmc_clock)(struct platform_device *sdev, int state);
> +
> +	/* NULL - use ASIC3 signal,
> +	   TMIO_WP_ALWAYS_RW - assume always R/W (e.g. miniSD)
> +	   otherwise - machine-specific handler */
> +	int (*mmc_get_ro)(struct platform_device *pdev);
> +	short address_shift;
> +};

Last update: 2007-05-01 09:01
Nest a function within a function

Nested functions use a function as one of the arguments of another function. You can nest up to 64 levels of functions.

The following formula sums a set of numbers (G2:G5) only if the average of another set of numbers (F2:F5) is greater than 50. Otherwise it returns 0.

(Figure: the AVERAGE and SUM functions are nested within the IF function, as in =IF(AVERAGE(F2:F5)>50,SUM(G2:G5),0).)

1. Click the cell in which you want to enter the formula.

2. To start the formula with the function, click Function Wizard on the formula bar (formula bar: A bar at the top of the Excel window that you use to enter or edit values or formulas in cells or charts. Displays the constant value or formula stored in the active cell.).

3. Select the function you want to use. You can enter a question that describes what you want to do in the Search for a function box (for example, "add numbers" returns the SUM function), or browse from the categories in the Or Select a category box.

4. Enter the arguments (argument: The values that a function uses to perform operations or calculations. The type of argument a function uses is specific to the function. Common arguments that are used within functions include numbers, text, cell references, and names.).

• To enter cell references as an argument, click Collapse Dialog next to the argument that you want (which temporarily hides the dialog box), select the cells on the worksheet, and then click Expand Dialog.

• To enter another function as an argument, enter the function in the argument box that you want. For example, you can add SUM(G2:G5) in the Value_if_true edit box of the IF function.

• The parts of the formula displayed in the Function Arguments dialog box reflect the function that you selected in the previous step. For example, if you clicked IF, Function arguments displays the arguments for the IF function.

Applies to: Excel 2010
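The nested formula above is ordinary conditional logic. As a rough illustration (not part of the original article), the same calculation expressed in Python, with sample values standing in for the worksheet ranges:

```python
# Python sketch of =IF(AVERAGE(F2:F5)>50, SUM(G2:G5), 0)
# f and g stand in for the ranges F2:F5 and G2:G5 (sample values assumed).

def nested_if(f, g):
    # AVERAGE(F2:F5) > 50 is the IF condition; SUM(G2:G5) is the value_if_true.
    return sum(g) if sum(f) / len(f) > 50 else 0

f = [55, 60, 45, 50]   # average is 52.5, so the condition is true
g = [10, 20, 30, 40]

print(nested_if(f, g))  # 100
```

Just as in the worksheet version, AVERAGE is evaluated first, then IF decides whether SUM contributes or 0 is returned.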
Method: organizations.securityAssessmentResults.batchCompute

Compute RAV2 security scores for a set of resources.

HTTP request

POST https://apigee.googleapis.com/v1/{name=organizations/*/securityAssessmentResults}:batchCompute

The URL uses gRPC Transcoding syntax.

Path parameters

name: string
Required. Name of the organization for which the score needs to be computed in the following format: organizations/{org}/securityAssessmentResults

Request body

The request body contains data with the following structure:

JSON representation

{
  "profile": string,
  "scope": string,
  "pageSize": integer,
  "pageToken": string,

  // Union field resources can be only one of the following:
  "includeAllResources": {
    object (IncludeAll)
  },
  "include": {
    object (ResourceArray)
  }
  // End of list of possible types for union field resources.
}

Fields

profile: string
Required. Name of the profile that is used for computation.

scope: string
Required. Scope of the resources for the computation. For Apigee, the environment is the scope of the resources.

pageSize: integer
Optional. The maximum number of results to return. The service may return fewer than this value. If unspecified, at most 50 results will be returned.

pageToken: string
Optional. A page token, received from a previous securityAssessmentResults.batchCompute call. Provide this to retrieve the subsequent page.

Union field resources. REQUIRED. resources can be only one of the following:

includeAllResources: object (IncludeAll)
Include all resources under the scope.

include: object (ResourceArray)
Include only these resources.

Response body

Response for securityAssessmentResults.batchCompute. If successful, the response body contains data with the following structure:

JSON representation

{
  "securityAssessmentResults": [
    {
      object (SecurityAssessmentResult)
    }
  ],
  "assessmentTime": string,
  "nextPageToken": string
}

Fields

securityAssessmentResults[]: object (SecurityAssessmentResult)
Default sort order is by resource name in alphabetic order.
assessmentTime: string (Timestamp format)
The time of the assessment api call. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

nextPageToken: string
A token that can be sent as pageToken to retrieve the next page. If this field is blank, there are no subsequent pages.

Authorization scopes

Requires the following OAuth scope:

• https://www.googleapis.com/auth/cloud-platform

IncludeAll

This type has no fields. Message for include_all option.

ResourceArray

An array of resource messages.

JSON representation

{
  "resources": [
    {
      object (Resource)
    }
  ]
}

Fields

resources[]: object (Resource)
Required. The array of resources. For Apigee, the proxies are resources.

Resource

Resource for which we are computing security assessment.

JSON representation

{
  "type": enum (ResourceType),
  "name": string
}

Fields

type: enum (ResourceType)
Required. Type of this resource.

name: string
Required. Name of this resource.

ResourceType

Type of the resource

Enums
RESOURCE_TYPE_UNSPECIFIED: ResourceType not specified.
API_PROXY: Resource is an Apigee Proxy.

SecurityAssessmentResult

The security assessment result for one resource.

JSON representation

{
  "resource": {
    object (Resource)
  },
  "createTime": string,

  // Union field result can be only one of the following:
  "scoringResult": {
    object (ScoringResult)
  },
  "error": {
    object (Status)
  }
  // End of list of possible types for union field result.
}

Fields

resource: object (Resource)
The assessed resource.

createTime: string (Timestamp format)
The time of the assessment of this resource. This could lag behind assessmentTime due to caching within the backend. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

Union field result.
result can be only one of the following:

scoringResult: object (ScoringResult)
The result of the assessment.

error: object (Status)
The error status if scoring fails.

Resource

Resource for which we are computing security assessment.

JSON representation

{
  "type": enum (ResourceType),
  "name": string,
  "resourceRevisionId": string
}

Fields

type: enum (ResourceType)
Required. Type of this resource.

name: string
Required. Name of this resource.

resourceRevisionId: string
The revision id for the resource. In case of Apigee, this is proxy revision id.

ResourceType

Type of the resource

Enums
RESOURCE_TYPE_UNSPECIFIED: ResourceType not specified.
API_PROXY: Resource is an Apigee Proxy.

ScoringResult

The result of the assessment.

JSON representation

{
  "score": integer,
  "severity": enum (Severity),
  "failedAssessmentPerWeight": {
    string: integer,
    ...
  },
  "assessmentRecommendations": {
    string: {
      object (AssessmentRecommendation)
    },
    ...
  },
  "dataUpdateTime": string
}

Fields

score: integer
The security score of the assessment.

severity: enum (Severity)
The severity of the assessment.

failedAssessmentPerWeight: map (key: string, value: integer)
The number of failed assessments grouped by its weight. Keys are one of the following: "MAJOR", "MODERATE", "MINOR". An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

assessmentRecommendations: map (key: string, value: object (AssessmentRecommendation))
The recommendations of the assessment. The key is the "name" of the assessment (not displayName), and the value are the recommendations. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

dataUpdateTime: string (Timestamp format)
The time when resource data was last fetched for this resource. This time may be different than when the resource was actually updated due to lag in data collection.
A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

Severity

The severity definition.

Enums
SEVERITY_UNSPECIFIED: Severity is not defined.
LOW: Severity is low.
MEDIUM: Severity is medium.
HIGH: Severity is high.
MINIMAL: Severity is minimal.

AssessmentRecommendation

The message format of a recommendation from the assessment.

JSON representation

{
  "displayName": string,
  "weight": enum (Weight),
  "scoreImpact": integer,
  "verdict": enum (Verdict),
  "recommendations": [
    {
      object (Recommendation)
    }
  ]
}

Fields

displayName: string
The display name of the assessment.

weight: enum (Weight)
The weight of the assessment which was set in the profile.

scoreImpact: integer
Score impact indicates the impact on the overall score if the assessment were to pass.

verdict: enum (Verdict)
Verdict indicates the assessment result.

recommendations[]: object (Recommendation)
The recommended steps of the assessment.

Weight

The weight of an assessment within the profile.

Enums
WEIGHT_UNSPECIFIED: The weight is unspecified.
MINOR: The weight is minor.
MODERATE: The weight is moderate.
MAJOR: The weight is major.

Verdict

Verdict indicates the assessment result.

Enums
VERDICT_UNSPECIFIED: The verdict is unspecified.
PASS: The assessment has passed.
FAIL: The assessment has failed.

Recommendation

The format of the assessment recommendation.

JSON representation

{
  "description": string,
  "link": {
    object (Link)
  }
}

Fields

description: string
The description of the recommendation.
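To make the request shape concrete, here is an illustrative request body for this method. The profile, scope, and proxy names below are placeholders invented for the example, not values from this reference:

```json
{
  "profile": "default",
  "scope": "test-environment",
  "include": {
    "resources": [
      { "type": "API_PROXY", "name": "example-proxy" }
    ]
  },
  "pageSize": 10
}
```

Note that exactly one of include or includeAllResources may be set, since they are members of the resources union field.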
What Is Jira in Web Development?

Jira is a bug-tracking and project management tool used extensively by software developers. It is developed by Atlassian, an Australian company. Jira is written in Java and offers both cloud-based and on-premises deployment options.

Jira is used for issue tracking and project management by organizations across a wide range of industries. It helps developers to plan, track, and release software products efficiently. Jira also offers features like agile reporting, time tracking, and custom workflows that help developers to manage their work effectively.

Jira is a popular tool among developers because it is easy to use and offers a wide range of features. However, one of the drawbacks of Jira is that it can be complex to configure for some users.
How to control Chromedriver using curl

Here is how to use Chromedriver without libraries like selenium-webdriver. This can be useful for debugging. The following example visits a web page and reads a headline's text contents.

1. Download ChromeDriver, go to the same directory, and run this command:

On Mac: ./chromedriver &
On Windows: chromedriver.exe

2. Create a session:

Java Code: WebDriver driver = new ChromeDriver();
Curl Command: curl -XPOST http://localhost:9515/session -d '{"desiredCapabilities":{"browserName":"chrome"}}'

(The session id used in the commands below, 142a7f9bb57b8fda48636c8709df9591, is the sessionId value returned by this call; substitute your own.)

3. Launch a URL:

Java Code: driver.get("https://www.google.com");
Curl Command: curl http://localhost:9515/session/142a7f9bb57b8fda48636c8709df9591/url -d '{"url":"https://www.google.com"}'

4. Find an element:

Java Code: WebElement element = driver.findElement(By.name("q"));
Curl Command: curl http://localhost:9515/session/142a7f9bb57b8fda48636c8709df9591/element -d '{"using":"name", "value":"q"}'

5. Enter text in the element:

Java Code: element.sendKeys("Naveen AutomationLabs");
Curl Command: curl http://localhost:9515/session/142a7f9bb57b8fda48636c8709df9591/element/0.45843488917986774-1/value -d '{"value":["Naveen Automation Labs"]}'

6. Quit the browser / close the session:

Java Code: driver.quit();
Curl Command: curl -X DELETE http://localhost:9515/session/142a7f9bb57b8fda48636c8709df9591

Cheers!!
Naveen AutomationLabs
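The session id embedded in steps 3 to 6 has to be pulled out of the JSON returned by step 2. A small helper sketch for that step; the response shape here is assumed from the legacy JSON Wire Protocol that this ChromeDriver endpoint speaks, and the literal values are for illustration only:

```python
import json

# Example body returned by POST /session (shape assumed from the
# legacy JSON Wire Protocol; your sessionId will differ).
create_response = '{"sessionId": "142a7f9bb57b8fda48636c8709df9591", "status": 0, "value": {"browserName": "chrome"}}'

session_id = json.loads(create_response)["sessionId"]
session_url = "http://localhost:9515/session/" + session_id

print(session_url)
```

With session_url in hand, the later curl targets become session_url + "/url", session_url + "/element", and so on, instead of pasting the id by hand.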
Manage VB6 Code Complexity with the State Behavior Pattern

VB6 can be prone to disorganization and the State behavior pattern is a consummate organizer. Use it proactively to prevent spaghetti code or reactively to manage code complexity.

To learn how to do something well, following in the footsteps of those that have successfully done it before can be a useful practice. Think of the age-old master/apprentice system, which works well and is still used today. In medicine, for example, the chief resident can be considered a master and the intern the apprentice. This principle is especially effective—yet consistently ignored—in software development. Many developers, working in isolation with little or no budget for books, are forced to recreate what has already been created and re-learn what is already known. Fortunately, masters are available to those programmers who will apprentice themselves. Patterns and anti-patterns are a body of work that can play the role of master.

Patterns are general blueprints for solutions to known problems. These blueprints, when implemented correctly and applied to the correct kind of problem, have proved effective. Anti-patterns are the opposite: solutions that are known to fail.

Patterns currently come in three flavors: creational, structural, and behavioral:

• Creational patterns deal with object creation.
• Structural patterns deal with class and object composition.
• Behavioral patterns address object interaction and the distribution of responsibility.

Because patterns are evolving and still being discovered, very few are experts in them. However, the number of pattern practitioners is increasing, and several excellent references on patterns are available today.
(A must-have for every practitioner is Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, et al.) In fact, many patterns—and more recently anti-patterns—are well documented, and you can easily find many successful examples. This article discusses a pattern that is both easy to implement and easy to use in Visual Basic 6, the State pattern. VB6 can be prone to disorganization and the State pattern, a behavioral pattern, is a consummate organizer.

Dynamic Object Reclassification

Typically, problems arise when a form tries to do too much. For example, suppose you can interact with a form in the following states: browse mode, edit mode, and administrative mode. Generally, you end up writing a lot of conditional code that depends on some flag—let's say a mode flag for argument's sake—and uses a lot of conditional checks to turn controls on and off and permit editing and updating based on the state of this flag. The result is generally spaghetti code of varying disorganization.

Now, I am not impugning anybody's dedication or desire. What I am saying is that requirements typically evolve and change, and complexity insidiously winds its way into code. Proactively using the State behavior pattern can prevent behavioral complexity, and reactively using it can eliminate such complexity once it rears its ugly head.

The basic idea is that a particular class represents a context. A good example is a form. The form is coded in such a way that a state object, an interface, defines its behavior. A behavior class implements the interface for each state possibility. By dynamically changing the instance of the state object, one changes the behavior of the context—in this example, the behavior of the form. This is referred to as dynamic reclassification.
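The article's own example is VB6, whose code is not reproduced here. As a rough language-neutral sketch of the same arrangement (Python used purely for illustration; the mode names follow the form example above):

```python
class FormState:
    """The 'state object' interface that each mode implements."""
    def can_edit(self):
        raise NotImplementedError

class BrowseMode(FormState):
    def can_edit(self):
        return False

class EditMode(FormState):
    def can_edit(self):
        return True

class Form:
    """The context: its behavior is delegated to the current state object."""
    def __init__(self):
        self._state = BrowseMode()

    def set_mode(self, state):
        # Dynamic reclassification: swapping the state instance changes
        # the form's behavior without any mode-flag conditionals.
        self._state = state

    def try_edit(self):
        return "editing enabled" if self._state.can_edit() else "read-only"

form = Form()
print(form.try_edit())      # read-only
form.set_mode(EditMode())
print(form.try_edit())      # editing enabled
```

Adding an administrative mode means adding one more state class, not another branch in every event handler, which is exactly the spaghetti-code prevention the article describes.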
A Guide to launchSettings.json in ASP.NET

If you're building a web application using ASP.NET Core, you may have come across the file launchSettings.json. This file is used to configure how your application is launched during development. Since it comes as part of the default templates, most of us simply overlook it. While we may never have to touch this file in most cases, it is good to understand its purpose so that we can make adjustments to it, if such a need ever arises. In this post, we'll take a closer look at it and demystify its inner workings.

What is launchSettings.json?

launchSettings.json is a configuration file used by ASP.NET Core to specify how an application is launched. It contains information such as the command to start the application, the environment variables to set, and the ports to use. This file is used by Visual Studio and the .NET Core CLI when launching the application during development.

Understanding the structure of launchSettings.json

Let's take a look at the structure of launchSettings.json. Here's an example:

{
  "profiles": {
    "MyApplication": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:5000/"
    }
  }
}

The profiles property contains a collection of named profiles. In this example, there is only one profile, called MyApplication. Each profile has a set of properties that specify how the application should be launched.

The commandName property specifies how the application should be started. In this case, the value is Project, which means that the application should be started using the project's output assembly.

The launchBrowser property specifies whether the default browser should be launched when the application is started. If this property is set to true, the browser will be launched automatically.
The environmentVariables property contains a collection of environment variables that should be set when the application is started. In this example, the ASPNETCORE_ENVIRONMENT variable is set to Development.

The applicationUrl property specifies the URL that the application should be accessible from. In this case, the URL is http://localhost:5000/.

Using launchSettings.json for App Initialization

launchSettings.json is used by Visual Studio and the .NET Core CLI when launching the application during development. When you run your application in Visual Studio, it will use the launchSettings.json file to determine how to launch the application.

You can also use launchSettings.json when running your application from the command line using the dotnet run command. By default, dotnet run will use the Development environment, but you can specify a different environment using the ASPNETCORE_ENVIRONMENT environment variable.

dotnet run --environment Production

Closing Remarks

This file should be committed to source control, along with the rest of our code. It allows team members to utilize the same launch settings across the team. While we may never have to touch it, we have learned how the framework and tools like VSCode and Visual Studio use it to launch your application. You may have wondered how your browser opened up with your dotnet site when you issued a dotnet run at the terminal. I hope this post shed some light on how some of those things are wired up.
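A launchSettings.json often carries more than one profile. A sketch of what that might look like, extending the example above (the second profile name, port, and environment are invented for illustration, not from the post):

```json
{
  "profiles": {
    "MyApplication": {
      "commandName": "Project",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:5000/"
    },
    "MyApplication-Staging": {
      "commandName": "Project",
      "launchBrowser": false,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Staging"
      },
      "applicationUrl": "http://localhost:5001/"
    }
  }
}
```

As the .NET CLI documents it, a specific profile can then be selected with dotnet run --launch-profile "MyApplication-Staging"; without that flag, dotnet run picks the first profile whose commandName is Project.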
Completed
Last Updated: 05 Oct 2016 14:00 by ADMIN
Hristo
Created on: 27 Sep 2016 13:21
Category: GridView
Type: Bug Report
FIX. RadGridView - when copied, the date values should respect the format string property set on the GridViewDateTimeColumn

Workaround:

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        this.radGridView1.DataSource = this.GetData();
        this.radGridView1.AutoSizeColumnsMode = GridViewAutoSizeColumnsMode.Fill;
        ((GridViewDateTimeColumn)this.radGridView1.Columns["Date"]).FormatString = "{0: yyyy-MM-dd hh:mm:ss.fff tt}";
    }

    private DataTable GetData()
    {
        DataTable dt = new DataTable();
        dt.Columns.Add("Id", typeof(int));
        dt.Columns.Add("Name", typeof(string));
        dt.Columns.Add("Date", typeof(DateTime));
        dt.Columns.Add("Bool", typeof(bool));
        for (int i = 0; i < 100; i++)
        {
            dt.Rows.Add(i, "Name " + i, DateTime.Now.AddDays(i), i % 2 == 0);
        }
        return dt;
    }
}

public class MyRadGridView : RadGridView
{
    public override string ThemeClassName
    {
        get { return typeof(RadGridView).FullName; }
    }

    protected override RadGridViewElement CreateGridViewElement()
    {
        return new MyRadGridViewElement();
    }
}

public class MyRadGridViewElement : RadGridViewElement
{
    protected override Type ThemeEffectiveType
    {
        get { return typeof(MyRadGridViewElement); }
    }

    protected override MasterGridViewTemplate CreateTemplate()
    {
        return new MyMasterGridViewTemplate();
    }
}

public class MyMasterGridViewTemplate : MasterGridViewTemplate
{
    public override void Copy()
    {
        base.Copy();
        GridViewCellInfo[] cells = null;
        if (this.SelectionMode == GridViewSelectionMode.CellSelect)
        {
            cells = new GridViewCellInfo[this.SelectedCells.Count];
            this.SelectedCells.CopyTo(cells, 0);
        }
        else if (this.SelectionMode == GridViewSelectionMode.FullRowSelect)
        {
            GridViewDataRowInfo row = this.SelectedRows[0] as GridViewDataRowInfo;
            if (this.SelectedRows.Count == 1 && row.ViewTemplate.CurrentColumn != null)
            {
                cells = new GridViewCellInfo[row.Cells.Count];
                for (int i = 0; i < row.Cells.Count; i++)
                {
                    cells[i] = row.Cells[i];
                }
            }
        }
        if (Clipboard.GetData(DataFormats.Text) != null)
        {
            string data = Clipboard.GetData(DataFormats.Text).ToString();
            if (data != string.Empty && cells != null)
            {
                var values = data.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
                StringBuilder sb = new StringBuilder();
                foreach (string value in values)
                {
                    DateTime date;
                    if (DateTime.TryParse(value, out date))
                    {
                        string baseFormat = "yyyy-MM-dd HH:mm tt";
                        foreach (var cell in cells)
                        {
                            if (cell.ColumnInfo is GridViewDateTimeColumn &&
                                ((DateTime)cell.Value).ToString(baseFormat) == date.ToString(baseFormat))
                            {
                                sb.Append(string.Format(((GridViewDateTimeColumn)cell.ColumnInfo).FormatString, cell.Value) + "\t");
                                break;
                            }
                        }
                    }
                    else
                    {
                        sb.Append(value + "\t");
                    }
                }
                Clipboard.Clear();
                Clipboard.SetData(DataFormats.Text, sb.ToString());
            }
        }
    }
}
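The core idea of the workaround — parse each tab-separated clipboard value as a date and, if it parses, re-emit it with the column's format string, otherwise pass it through unchanged — can be sketched outside the Telerik API in Python. The helper name and the assumed source date format are illustrative, not part of RadGridView:

```python
from datetime import datetime

def reformat_clipboard(data: str, out_fmt: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Re-emit tab-separated values, rewriting parseable dates with out_fmt."""
    pieces = []
    for value in data.split("\t"):
        try:
            # Assumed source format for the sketch; the C# code uses DateTime.TryParse
            date = datetime.strptime(value, "%m/%d/%Y %H:%M")
            pieces.append(date.strftime(out_fmt))
        except ValueError:
            pieces.append(value)  # non-date cells pass through unchanged
    return "\t".join(pieces)

print(reformat_clipboard("1\tName 1\t09/27/2016 13:21"))
# 1	Name 1	2016-09-27 13:21:00
```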
DI (dependency injection) is a design pattern where the dependencies of a component (instances of objects, properties) are set through the constructor(s), methods, or fields (properties).

213 votes, 13 answers, 49k views
So Singletons are bad, then what?
There has been a lot of discussion lately about the problems with using (and overusing) Singletons. I've been one of those people earlier in my career too. I can see what the problem is now, and yet, ...

71 votes, 6 answers, 59k views
What does the Spring framework do? Should I use it? Why or why not? [closed]
So, I'm starting a brand-new project in Java, and am considering using Spring. Why am I considering Spring? Because lots of people tell me I should use Spring! Seriously, any time I've tried to get ...

81 votes, 4 answers, 22k views
Difference between Dependency Injection (DI) & Inversion of Control (IOC)
I've been seeing a lot of references of Dependency Injection (DI) & Inversion Of Control (IOC), but I don't really know if there is a difference between them or not. I would like to start using ...

17 votes, 4 answers, 2k views
Does "Inversion of Control" promote "Anemic Domain Model"?
When I used IoC Container in my last project, I ended up with anemic entities and most of my business logic in Stateless Services. I have seen projects written by other developers that utilize ...

81 votes, 18 answers, 8k views
Dependency injection: How to sell it
Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it. Background Recently, our team just got this big project that is to built from ...

41 votes, 10 answers, 4k views
(Why) is it important that a unit test not test dependencies?
I understand the value of automated testing and use it wherever the problem is well-specified enough that I can come up with good test cases. I've noticed, though, that some people here and on ...

39 votes, 5 answers, 5k views
When is it not appropriate to use the dependency injection pattern?
Since learning (and loving) automated testing I have found myself using the dependency injection pattern in almost every project. Is it always appropriate to use this pattern when working with ...

25 votes, 6 answers, 4k views
Should I use Dependency Injection or static factories?
When designing a system I am often faced with the problem of having a bunch of modules (logging, database acces, etc) being used by the other modules. The question is, how do I go about providing ...

12 votes, 5 answers, 4k views
Dependency injection; good practices to reduce boilerplate code
I have a simple question, and I'm not even sure it has an answer but let's try. I'm coding in C++, and using dependancy injection to avoid global state. This works quite well, and I don't run in ...

12 votes, 1 answer, 2k views
Domain-Driven-Design - external dependencies in the Entity problem
I'd like to start Domain-Driven-Design, but there are several problems I'd like to solve before starting :) Let's imagine I have a Groups and Users and when user wants to join a group, I'm calling ...

8 votes, 2 answers, 4k views
Dependency injection with n-tier Entity Framework solution
I am currently designing an n-tier solution which is using Entity Framework 5 (.net 4) as its data access strategy, but am concerned about how to incorporate dependency injection to make it testable / ...

6 votes, 4 answers, 3k views
Use Dependency Injection For Data Objects?
I'm just learning about dependency injection, and am stuck on something. Dependency Injection recommends sending dependent classes through the constructor, but I'm wondering if this is necessary for ...

14 votes, 3 answers, 2k views
Sell me on IoC containers, please
I've seen several recommend use of IoC containers in code. The motivation is simple. Take the following dependency injected code: class UnitUnderTest { std::auto_ptr<Dependency> d_; public: ...

2 votes, 2 answers, 149 views
Injecting dependencies (DI) in c++ applications
I am playing with dependency injection, but i am not sure I am doing it right. Especially, I am not sure what should be the correct way to build classes with injected dependencies. Say I have a class ...

2 votes, 1 answer, 263 views
Customizing configuration with Dependency Injection
I'm designing a small application infrastructure library, aiming to simplify development of ASP.NET MVC based applications. Main goal is to enforce convention over configuration. Hovewer, I still ...

1 vote, 2 answers, 84 views
Initialization of objects in a system using dependency injection
This is a follow up question to the following post: Injecting dependencies (DI) in c++ applications In a system that uses DI, someone, somewhere should be responsible to create the various objects ...
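The definition at the top — dependencies supplied through the constructor rather than constructed inside the component — is the pattern most of these questions revolve around. A minimal constructor-injection sketch (all class names here are made up for illustration):

```python
class RealDatabase:
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "from real DB"}

class FakeDatabase:
    """Test double satisfying the same implicit interface."""
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "stub"}

class UserService:
    def __init__(self, database):
        # The dependency is injected, not constructed here --
        # so a test can pass a fake without touching a real database.
        self.database = database

    def display_name(self, user_id):
        return self.database.fetch_user(user_id)["name"]

service = UserService(FakeDatabase())
print(service.display_name(42))  # stub
```

Swapping `FakeDatabase` for `RealDatabase` changes behavior without touching `UserService` — which is also why the unit-testing questions above keep coming back to this pattern.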
Display custom value on HA dashboard

I am trying to display a time value, only known in one of my Apps, on the HA dashboard. Seems like a simple thing, but can't really find a way to do this. Perhaps with set_state (but not supposed to use this), but then on what kind of entity?

Yes, set_state is the way to go. Just create a sensor with self.set_state("sensor.my_time_value", state=StringWithYourTimeValue). The sensor will not be persistent, meaning that whenever you restart Home Assistant, the sensor will be gone and your app will have to create the sensor again.

Thanks, you put me on the right track, using the "cat method" now:

that's a really old thread with a method i used before i really discovered the possibilities from set_state. a lot easier is it this way:

time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self.set_state("sensor.any_name_you_like", state=time)

the first time the set_state gets called HA automatically creates a sensor and then it gets updated when the set_state gets called again. it is better because with the "cat method" HA keeps checking the file regularly (so more load) but you still get a lag before the sensor is updated. with set_state, the sensor is updated immediately.

Hmm right, tomas suggested that above, but I was getting an error. Tried again now, and it seems the sensor is created in HA but AppDaemon still errors on it?

2018-06-04 13:03:00.381467 WARNING AppDaemon: Roxy: Entity sensor.roxy_ago not found in AppDaemon

Well, it's a warning, but still…odd?

the warning you get 1 time. it tells you that the entity doesn't exist. at the time that andrew created the set_state function he wasn't aware of the possibility to use it to create sensors. so the warning was a general one to let you notice that you use an unknown entity. in theory there should be a function create_sensor that doesn't generate the warning, but just to get rid of a warning that would be a bit too much.

still, remember that you get the warning every time you have restarted HA, because with restarting HA the sensor disappears until you set the state the first time again. i use this

default_state = "anything"
if not self.entity_exists("sensor.my_sensor"):
    self.set_state("sensor.my_sensor", state=default_state)

in the initialise from an app with priority 1 so that all my sensors get set as soon as i start AD. you can also set attributes like friendly name like this:

attributes = {"friendly_name": "your friendly name"}

Thanks for the info! All working now, great.
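The two snippets in the thread — the timestamp format string and the create-if-missing guard — can be tried outside AppDaemon with a stand-in state store. `StateStore` below is a stub written for this sketch; in a real app you would call `self.set_state` / `self.entity_exists` on the AppDaemon API instead:

```python
from datetime import datetime

class StateStore:
    """Stand-in for Home Assistant's state machine (illustrative stub only)."""
    def __init__(self):
        self._states = {}

    def entity_exists(self, entity_id):
        return entity_id in self._states

    def set_state(self, entity_id, state):
        # The first call creates the sensor, later calls update it --
        # mirroring how HA behaves when AppDaemon calls set_state.
        self._states[entity_id] = state

store = StateStore()
if not store.entity_exists("sensor.my_time_value"):
    store.set_state("sensor.my_time_value", state="unknown")

now = datetime(2018, 6, 4, 13, 3, 0)  # fixed time so the output is predictable
store.set_state("sensor.my_time_value", state=now.strftime("%Y-%m-%d %H:%M:%S"))
print(store._states["sensor.my_time_value"])  # 2018-06-04 13:03:00
```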
My mytake keeps glitching, can an admin help?
My mytake keeps switching to sexual behavior, when I'm trying to write about books and writing. How can this be fixed?
Most Helpful Girl
What Guys Said 3
• That happens if you use some keyword that marks it as "Sexual Behavior". For example, the word "penis" is one of those words. There are other words too that qualify for the same category.
• It's done the same with me before on my recipe mytakes and my political mytakes, you have to let an admin know about it
What Girls Said 2
• apparently you're too sexy to be anything other than sexual behavior
• Contact the Admins directly to let them know the problem you're having. http://www.girlsaskguys.com/contact
Find & Replace
The string REPLACE() function can be used to find and replace a text string within a table.

Syntax:
REPLACE(text_string, from_string, to_string)

Which can be used with the UPDATE command as follows:
UPDATE TABLE_NAME SET FIELD_NAME = REPLACE(FIELD_NAME, 'find this string', 'replace found string with this string')

e.g.
UPDATE wp_8_posts SET post_content = REPLACE(post_content, 'advcpp', 'intcpp')
This example will UPDATE all records in the wp_8_posts table where the post_content field contains advcpp and replace it with intcpp.

Another example:
SELECT REPLACE('www.tech-academy.co.uk', 'w', 'X')
Which would result in 'XXX.tech-academy.co.uk'
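The same find-and-replace UPDATE can be tried out with Python's built-in sqlite3 module, whose SQL dialect also provides REPLACE(); the table and column names below mirror the tutorial's example but are otherwise illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, post_content TEXT)")
conn.execute("INSERT INTO posts (post_content) VALUES ('see advcpp lesson 1'), ('no match here')")

# Replace every occurrence of 'advcpp' with 'intcpp' in post_content
conn.execute("UPDATE posts SET post_content = REPLACE(post_content, 'advcpp', 'intcpp')")

rows = [row[0] for row in conn.execute("SELECT post_content FROM posts ORDER BY id")]
print(rows)  # ['see intcpp lesson 1', 'no match here']
```

Rows without a match are left unchanged, which is what makes the UPDATE safe to run across the whole table.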
Better timing Dependencies:   FP MQTTPacket Fork of MQTT by MQTT MQTTClient.h Committer: icraggs Date: 2014-04-14 Revision: 16:91c2f9a144d4 Parent: 15:64a57183aa03 Child: 19:57f6f976e878 File content as of revision 16:91c2f9a144d4: /******************************************************************************* * Copyright (c) 2014 IBM Corp. * * All rights reserved. This program and the accompanying materials * are made available under the terms of the Eclipse Public License v1.0 * and Eclipse Distribution License v1.0 which accompany this distribution. * * The Eclipse Public License is available at * http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * Contributors: * Ian Craggs - initial API and implementation and/or initial documentation *******************************************************************************/ #if !defined(MQTTCLIENT_H) #define MQTTCLIENT_H #include "FP.h" #include "MQTTPacket.h" #include "stdio.h" namespace MQTT { enum QoS { QOS0, QOS1, QOS2 }; struct Message { enum QoS qos; bool retained; bool dup; unsigned short id; void *payload; size_t payloadlen; }; class PacketId { public: PacketId(); int getNext(); private: static const int MAX_PACKET_ID = 65535; int next; }; typedef void (*messageHandler)(Message*); typedef struct limits { int MAX_MQTT_PACKET_SIZE; // int MAX_MESSAGE_HANDLERS; // each subscription requires a message handler int MAX_CONCURRENT_OPERATIONS; // each command which runs concurrently can have a result handler, when we are in multi-threaded mode int command_timeout; limits() { MAX_MQTT_PACKET_SIZE = 100; MAX_MESSAGE_HANDLERS = 5; MAX_CONCURRENT_OPERATIONS = 1; // 1 indicates single-threaded mode - set to >1 for multithreaded mode command_timeout = 30; } } Limits; template<class Network, class Timer, class Thread, class Mutex> class Client { public: struct Result { /* success or failure result data */ Client<Network, Timer, 
Thread, Mutex>* client; int connack_rc; }; typedef void (*resultHandler)(Result*); Client(Network* network, const Limits limits = Limits()); int connect(MQTTPacket_connectData* options = 0, resultHandler fn = 0); template<class T> int connect(MQTTPacket_connectData* options = 0, T *item = 0, void(T::*method)(Result *) = 0); // alternative to pass in pointer to member function int publish(const char* topic, Message* message, resultHandler rh = 0); int subscribe(const char* topicFilter, enum QoS qos, messageHandler mh, resultHandler rh = 0); int unsubscribe(const char* topicFilter, resultHandler rh = 0); int disconnect(int timeout, resultHandler rh = 0); void run(void const *argument); private: int cycle(int timeout); int waitfor(int packet_type, Timer& atimer); int keepalive(); int findFreeOperation(); int decodePacket(int* value, int timeout); int readPacket(int timeout); int sendPacket(int length, int timeout); int deliverMessage(MQTTString* topic, Message* message); Thread* thread; Network* ipstack; Limits limits; char* buf; char* readbuf; Timer ping_timer, connect_timer; unsigned int keepAliveInterval; bool ping_outstanding; PacketId packetid; typedef FP<void, Result*> resultHandlerFP; resultHandlerFP connectHandler; typedef FP<void, Message*> messageHandlerFP; struct MessageHandlers { const char* topic; messageHandlerFP fp; } *messageHandlers; // Message handlers are indexed by subscription topic // how many concurrent operations should we allow? 
Each one will require a function pointer struct Operations { unsigned short id; resultHandlerFP fp; const char* topic; // if this is a publish, store topic name in case republishing is required Message* message; // for publish, Timer timer; // to check if the command has timed out } *operations; // result handlers are indexed by packet ids static void threadfn(void* arg); }; } template<class Network, class Timer, class Thread, class Mutex> void MQTT::Client<Network, Timer, Thread, Mutex>::threadfn(void* arg) { ((Client<Network, Timer, Thread, Mutex>*) arg)->run(NULL); } template<class Network, class Timer, class Thread, class Mutex> MQTT::Client<Network, Timer, Thread, Mutex>::Client(Network* network, Limits limits) : limits(limits), packetid() { this->thread = 0; this->ipstack = network; this->ping_timer = Timer(); this->ping_outstanding = 0; // How to make these memory allocations portable? I was hoping to avoid the heap buf = new char[limits.MAX_MQTT_PACKET_SIZE]; readbuf = new char[limits.MAX_MQTT_PACKET_SIZE]; this->operations = new struct Operations[limits.MAX_CONCURRENT_OPERATIONS]; for (int i = 0; i < limits.MAX_CONCURRENT_OPERATIONS; ++i) operations[i].id = 0; this->messageHandlers = new struct MessageHandlers[limits.MAX_MESSAGE_HANDLERS]; for (int i = 0; i < limits.MAX_MESSAGE_HANDLERS; ++i) messageHandlers[i].topic = 0; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::sendPacket(int length, int timeout) { int sent = 0; while (sent < length) sent += ipstack->write(&buf[sent], length, timeout); if (sent == length) ping_timer.countdown(this->keepAliveInterval); // record the fact that we have successfully sent the packet return sent; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::decodePacket(int* value, int timeout) { char c; int multiplier = 1; int len = 0; const int MAX_NO_OF_REMAINING_LENGTH_BYTES = 4; *value = 0; do { 
int rc = MQTTPACKET_READ_ERROR; if (++len > MAX_NO_OF_REMAINING_LENGTH_BYTES) { rc = MQTTPACKET_READ_ERROR; /* bad data */ goto exit; } rc = ipstack->read(&c, 1, timeout); if (rc != 1) goto exit; *value += (c & 127) * multiplier; multiplier *= 128; } while ((c & 128) != 0); exit: return len; } /** * If any read fails in this method, then we should disconnect from the network, as on reconnect * the packets can be retried. * @param timeout the max time to wait for the packet read to complete, in milliseconds * @return the MQTT packet type, or -1 if none */ template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::readPacket(int timeout) { int rc = -1; MQTTHeader header = {0}; int len = 0; int rem_len = 0; /* 1. read the header byte. This has the packet type in it */ if (ipstack->read(readbuf, 1, timeout) != 1) goto exit; len = 1; /* 2. read the remaining length. This is variable in itself */ decodePacket(&rem_len, timeout); len += MQTTPacket_encode(readbuf + 1, rem_len); /* put the original remaining length back into the buffer */ /* 3. 
read the rest of the buffer using a callback to supply the rest of the data */ if (ipstack->read(readbuf + len, rem_len, timeout) != rem_len) goto exit; header.byte = readbuf[0]; rc = header.bits.type; exit: return rc; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::deliverMessage(MQTTString* topic, Message* message) { int rc = -1; // we have to find the right message handler - indexed by topic for (int i = 0; i < limits.MAX_MESSAGE_HANDLERS; ++i) { if (messageHandlers[i].topic && MQTTPacket_equals(topic, (char*)messageHandlers[i].topic)) { messageHandlers[i].fp(message); rc = 0; break; } } return rc; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::cycle(int timeout) { /* get one piece of work off the wire and one pass through */ // read the socket, see what work is due int packet_type = readPacket(timeout); printf("packet type %d\n", packet_type); int len, rc; switch (packet_type) { case CONNACK: if (this->thread) { Result res = {this, 0}; if (MQTTDeserialize_connack(&res.connack_rc, readbuf, limits.MAX_MQTT_PACKET_SIZE) == 1) ; connectHandler(&res); connectHandler.detach(); // only invoke the callback once } break; case PUBACK: if (this->thread) ; //call resultHandler case SUBACK: break; case PUBLISH: MQTTString topicName; Message msg; rc = MQTTDeserialize_publish((int*)&msg.dup, (int*)&msg.qos, (int*)&msg.retained, (int*)&msg.id, &topicName, (char**)&msg.payload, (int*)&msg.payloadlen, readbuf, limits.MAX_MQTT_PACKET_SIZE); if (msg.qos == QOS0) deliverMessage(&topicName, &msg); break; case PUBREC: int type, dup, mypacketid; if (MQTTDeserialize_ack(&type, &dup, &mypacketid, readbuf, limits.MAX_MQTT_PACKET_SIZE) == 1) ; // must lock this access against the application thread, if we are multi-threaded len = MQTTSerialize_ack(buf, limits.MAX_MQTT_PACKET_SIZE, PUBREL, 0, mypacketid); rc = sendPacket(len, timeout); // send the PUBREL 
packet if (rc != len) goto exit; // there was a problem break; case PUBCOMP: break; case PINGRESP: ping_outstanding = false; break; } keepalive(); exit: return packet_type; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::keepalive() { int rc = 0; if (keepAliveInterval == 0) goto exit; if (ping_timer.expired()) { if (ping_outstanding) rc = -1; else { int len = MQTTSerialize_pingreq(buf, limits.MAX_MQTT_PACKET_SIZE); rc = sendPacket(len, 1000); // send the ping packet if (rc != len) rc = -1; // indicate there's a problem else ping_outstanding = true; } } exit: return rc; } template<class Network, class Timer, class Thread, class Mutex> void MQTT::Client<Network, Timer, Thread, Mutex>::run(void const *argument) { while (true) cycle(ping_timer.left_ms()); } // only used in single-threaded mode where one command at a time is in process template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::waitfor(int packet_type, Timer& atimer) { int rc = -1; do { if (atimer.expired()) break; // we timed out } while ((rc = cycle(atimer.left_ms())) != packet_type); return rc; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::connect(MQTTPacket_connectData* options, resultHandler resultHandler) { connect_timer.countdown(limits.command_timeout); MQTTPacket_connectData default_options = MQTTPacket_connectData_initializer; if (options == 0) options = &default_options; // set default options if none were supplied this->keepAliveInterval = options->keepAliveInterval; ping_timer.countdown(this->keepAliveInterval); int len = MQTTSerialize_connect(buf, limits.MAX_MQTT_PACKET_SIZE, options); int rc = sendPacket(len, connect_timer.left_ms()); // send the connect packet if (rc != len) goto exit; // there was a problem if (resultHandler == 0) // wait until the connack is received { // this will be a blocking 
call, wait for the connack if (waitfor(CONNACK, connect_timer) == CONNACK) { int connack_rc = -1; if (MQTTDeserialize_connack(&connack_rc, readbuf, limits.MAX_MQTT_PACKET_SIZE) == 1) rc = connack_rc; } } else { // set connect response callback function connectHandler.attach(resultHandler); // start background thread this->thread = new Thread((void (*)(void const *argument))&MQTT::Client<Network, Timer, Thread, Mutex>::threadfn, (void*)this); } exit: return rc; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::findFreeOperation() { int found = -1; for (int i = 0; i < limits.MAX_CONCURRENT_OPERATIONS; ++i) { if (operations[i].id == 0) { found = i; break; } } return found; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::subscribe(const char* topicFilter, enum QoS qos, messageHandler messageHandler, resultHandler resultHandler) { int index = 0; if (this->thread) index = findFreeOperation(); Timer& atimer = operations[index].timer; atimer.countdown(limits.command_timeout); MQTTString topic = {(char*)topicFilter, 0, 0}; int len = MQTTSerialize_subscribe(buf, limits.MAX_MQTT_PACKET_SIZE, 0, packetid.getNext(), 1, &topic, (int*)&qos); int rc = sendPacket(len, atimer.left_ms()); // send the subscribe packet if (rc != len) goto exit; // there was a problem /* wait for suback */ if (resultHandler == 0) { // this will block if (waitfor(SUBACK, atimer) == SUBACK) { int count = 0, grantedQoS = -1, mypacketid; if (MQTTDeserialize_suback(&mypacketid, 1, &count, &grantedQoS, readbuf, limits.MAX_MQTT_PACKET_SIZE) == 1) rc = grantedQoS; // 0, 1, 2 or 0x80 if (rc != 0x80) { for (int i = 0; i < limits.MAX_MESSAGE_HANDLERS; ++i) { if (messageHandlers[i].topic == 0) { messageHandlers[i].topic = topicFilter; messageHandlers[i].fp.attach(messageHandler); rc = 0; break; } } } } } else { // set subscribe response callback function } exit: return rc; } 
template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::unsubscribe(const char* topicFilter, resultHandler resultHandler) { int index = 0; if (this->thread) index = findFreeOperation(); Timer& atimer = operations[index].timer; MQTTString topic = {(char*)topicFilter, 0, 0}; int len = MQTTSerialize_unsubscribe(buf, buflen, 0, packetid.getNext(), 1, &topic); int rc = sendPacket(len, atimer); // send the subscribe packet if (rc != len) goto exit; // there was a problem /* wait for unsuback */ if (resultHandler == 0) { // this will block if (waitfor(UNSUBACK) == UNSUBACK) { int mypacketid; if (MQTTDeserialize_unsuback(&mypacketid, readbuf, readbuflen) == 1) rc = 0; } } else { // set unsubscribe response callback function } exit: command_timer.stop(); command_timer.reset(); return rc; } template<class Network, class Timer, class Thread, class Mutex> int MQTT::Client<Network, Timer, Thread, Mutex>::publish(const char* topicName, Message* message, resultHandler resultHandler) { command_timer.start(); MQTTString topic = {(char*)topicName, 0, 0}; message->id = packetid.getNext(); int len = MQTTSerialize_publish(buf, buflen, 0, message->qos, message->retained, message->id, topic, message->payload, message->payloadlen); int rc = sendPacket(len); // send the subscribe packet if (rc != len) goto exit; // there was a problem /* wait for acks */ if (resultHandler == 0) { if (message->qos == QOS1) { if (waitfor(PUBACK) == PUBACK) { int type, dup, mypacketid; if (MQTTDeserialize_ack(&type, &dup, &mypacketid, readbuf, readbuflen) == 1) rc = 0; } } else if (message->qos == QOS2) { if (waitfor(PUBCOMP) == PUBCOMP) { int type, dup, mypacketid; if (MQTTDeserialize_ack(&type, &dup, &mypacketid, readbuf, readbuflen) == 1) rc = 0; } } } else { // set publish response callback function } exit: command_timer.stop(); command_timer.reset(); return rc; } #endif
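The decodePacket routine in the listing above implements MQTT's variable-length "remaining length" field: each byte carries 7 bits of the value, the high bit is a continuation flag, and at most 4 length bytes are allowed (MQTTPACKET_READ_ERROR otherwise). The same scheme in Python, for reference:

```python
def encode_remaining_length(length: int) -> bytes:
    """Encode an MQTT remaining-length value (0 .. 268435455)."""
    out = bytearray()
    while True:
        byte = length % 128
        length //= 128
        if length > 0:
            byte |= 0x80  # continuation bit: more bytes follow
        out.append(byte)
        if length == 0:
            return bytes(out)

def decode_remaining_length(data: bytes) -> int:
    """Mirror of the C++ decodePacket loop: at most 4 length bytes."""
    value, multiplier = 0, 1
    for i, byte in enumerate(data):
        if i >= 4:
            raise ValueError("malformed remaining length")  # bad data
        value += (byte & 127) * multiplier
        multiplier *= 128
        if (byte & 128) == 0:  # continuation bit clear: last byte
            return value
    raise ValueError("truncated remaining length")

print(decode_remaining_length(encode_remaining_length(321)))  # 321
```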
Commit 39ed6689 authored by Nils Christian Ehmke's avatar Nils Christian Ehmke Added some tests parent 8247ef2f ......@@ -3,6 +3,8 @@ package kieker.diagnosis.service.data; import static org.hamcrest.collection.IsCollectionWithSize.hasSize; import static org.hamcrest.collection.IsEmptyCollection.empty; import static org.hamcrest.core.Is.is; import static org.hamcrest.core.IsNull.nullValue; import static org.hamcrest.number.IsCloseTo.closeTo; import static org.hamcrest.number.OrderingComparison.greaterThan; import static org.junit.Assert.assertThat; ......@@ -283,6 +285,55 @@ public class MonitoringLogServiceTest { ivService.importMonitoringLog( directory ); } @Test public void testTraceInDetail( ) throws Exception { // Prepare the data writeRecord( new TraceMetadata( 1L, 0L, "0", "host", 0L, 0 ) ); writeRecord( new BeforeOperationEvent( 1000000L, 1L, 0, "op1", "class1" ) ); writeRecord( new BeforeOperationEvent( 2000000L, 1L, 0, "op2", "class2" ) ); writeRecord( new AfterOperationEvent( 2500000L, 1L, 0, "op2", "class2" ) ); writeRecord( new AfterOperationFailedEvent( 4000000L, 1L, 0, "op1", "class1", "cause" ) ); writeMappingFile( ); finishWriting( ); // Import the directory final File directory = ivTemporaryFolder.getRoot( ); ivService.importMonitoringLog( directory ); // Make sure that the import worked as intended assertThat( ivService.getMethods( ), hasSize( 2 ) ); assertThat( ivService.getAggreatedMethods( ), hasSize( 2 ) ); assertThat( ivService.getTraceRoots( ), hasSize( 1 ) ); assertThat( ivService.getProcessedBytes( ), is( greaterThan( 0L ) ) ); // Now some advanced checks final MethodCall firstMethod = ivService.getMethods( ).get( 0 ); assertThat( firstMethod.getHost( ), is( "host" ) ); assertThat( firstMethod.getClazz( ), is( "class1" ) ); assertThat( firstMethod.getMethod( ), is( "op1" ) ); assertThat( firstMethod.getException( ), is( "cause" ) ); assertThat( firstMethod.getTimestamp( ), is( 1L ) ); assertThat( firstMethod.getDuration( ), is( 3000000L ) 
); assertThat( (double) firstMethod.getPercent( ), is( closeTo( 100.0, 0.01 ) ) ); assertThat( firstMethod.getTraceDepth( ), is( 2 ) ); assertThat( firstMethod.getTraceId( ), is( 1L ) ); assertThat( firstMethod.getTraceSize( ), is( 2 ) ); final MethodCall secondMethod = ivService.getMethods( ).get( 1 ); assertThat( secondMethod.getHost( ), is( "host" ) ); assertThat( secondMethod.getClazz( ), is( "class2" ) ); assertThat( secondMethod.getMethod( ), is( "op2" ) ); assertThat( secondMethod.getException( ), is( nullValue( ) ) ); assertThat( secondMethod.getTimestamp( ), is( 2L ) ); assertThat( secondMethod.getDuration( ), is( 500000L ) ); assertThat( (double) secondMethod.getPercent( ), is( closeTo( 16.66, 0.01 ) ) ); assertThat( secondMethod.getTraceDepth( ), is( 1 ) ); assertThat( secondMethod.getTraceId( ), is( 1L ) ); assertThat( secondMethod.getTraceSize( ), is( 1 ) ); assertThat( ivService.getTraceRoots( ).get( 0 ), is( firstMethod ) ); } private void writeRecord( final AbstractMonitoringRecord aRecord ) { // Register the record name final int recordKey = ivStringRegistry.get( aRecord.getClass( ).getName( ) ); ...... 
package kieker.diagnosis.service.methods;

import static org.hamcrest.core.Is.is;
import static org.junit.Assert.assertThat;

import org.junit.Before;
import org.junit.Test;

import com.google.inject.Guice;
import com.google.inject.Injector;

import kieker.diagnosis.KiekerTraceDiagnosisModule;
import kieker.diagnosis.service.data.MethodCall;
import kieker.diagnosis.service.data.MonitoringLogService;

public class MethodsServiceTest {

    private MethodsService ivMethodsService;
    private MonitoringLogService ivDataService;

    @Before
    public void setUp( ) {
        final Injector injector = Guice.createInjector( new KiekerTraceDiagnosisModule( ) );
        ivMethodsService = injector.getInstance( MethodsService.class );
        ivDataService = injector.getInstance( MonitoringLogService.class );
    }

    @Test
    public void testSimpleSearch( ) {
        // Prepare some data for the search
        createMethodCall( "host1", "class1", "op1", "cause1" );
        createMethodCall( "host1", "class2", "op1", "cause1" );
        createMethodCall( "host1", "class1", "op3", "cause1" );
        createMethodCall( "host1", "class1", "op3", "cause4" );

        assertThat( ivMethodsService.countMethods( ), is( 4 ) );

        // Now search with a filter
        final MethodsFilter methodsFilter = new MethodsFilter( );
        methodsFilter.setHost( "host1" );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 4 ) );

        methodsFilter.setClazz( "class1" );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 3 ) );

        methodsFilter.setMethod( "op3" );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 2 ) );

        methodsFilter.setException( "cause4" );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 1 ) );
    }

    @Test
    public void testSearchTypeFilter( ) {
        // Prepare some data for the search
        createMethodCall( "host1", "class1", "op1", "cause1" );
        createMethodCall( "host1", "class2", "op1", null );
        createMethodCall( "host1", "class1", "op3", "cause1" );
        createMethodCall( "host1", "class1", "op3", "cause4" );

        assertThat( ivMethodsService.countMethods( ), is( 4 ) );

        // Now search with a filter
        final MethodsFilter methodsFilter = new MethodsFilter( );
        methodsFilter.setSearchType( SearchType.ALL );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 4 ) );

        methodsFilter.setSearchType( SearchType.ONLY_FAILED );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 3 ) );

        methodsFilter.setSearchType( SearchType.ONLY_SUCCESSFUL );
        assertThat( ivMethodsService.searchMethods( methodsFilter ).size( ), is( 1 ) );
    }

    private void createMethodCall( final String aHost, final String aClazz, final String aMethod, final String aException ) {
        final MethodCall methodCall = new MethodCall( );
        methodCall.setHost( aHost );
        methodCall.setClazz( aClazz );
        methodCall.setMethod( aMethod );
        methodCall.setException( aException );
        ivDataService.getMethods( ).add( methodCall );
    }
}
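The filtering semantics the new test pins down — each criterion that is set narrows the result set, and "failed" calls are those carrying an exception — can be sketched in Python. The function and field names below are invented for illustration and are not Kieker's API:

```python
def search_methods(methods, host=None, clazz=None, method=None,
                   exception=None, search_type="ALL"):
    """Each criterion set to a value narrows the result; None means 'match any'."""
    results = []
    for m in methods:
        if host is not None and m["host"] != host:
            continue
        if clazz is not None and m["clazz"] != clazz:
            continue
        if method is not None and m["method"] != method:
            continue
        if exception is not None and m["exception"] != exception:
            continue
        # A call "failed" iff it recorded an exception
        if search_type == "ONLY_FAILED" and m["exception"] is None:
            continue
        if search_type == "ONLY_SUCCESSFUL" and m["exception"] is not None:
            continue
        results.append(m)
    return results

calls = [  # mirrors the fixture in testSearchTypeFilter
    {"host": "host1", "clazz": "class1", "method": "op1", "exception": "cause1"},
    {"host": "host1", "clazz": "class2", "method": "op1", "exception": None},
    {"host": "host1", "clazz": "class1", "method": "op3", "exception": "cause1"},
    {"host": "host1", "clazz": "class1", "method": "op3", "exception": "cause4"},
]
print(len(search_methods(calls, search_type="ONLY_FAILED")))     # 3
print(len(search_methods(calls, host="host1", clazz="class1")))  # 3
```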
Q: What is 29/49 as a percent?
Accepted Solution
A: Solution: 29/49 as a percent is 59.184%
Methods
Method 1 – Converting 29/49 Into a Percentage:
In a fraction, we can see how many "pieces" of a number are present (in the numerator) compared to how many pieces would make up the whole (the denominator). "Percent" means "per hundred", which is like asking the question "how many pieces would there be if there were 100 pieces possible?" For example, if we look at the percentage 50%, that means we have 50 pieces of the possible 100. Re-writing this in fraction form, we see 50/100.
We can start the process of converting a fraction into a percent by figuring out how to adjust the fraction so that the denominator will be 100. First, divide 100 by the denominator:
100/49 = 2.041
Then we can multiply both the numerator and denominator by this number:
(29 × 2.041)/(49 × 2.041) = 59.184/100
This works because multiplying both the numerator and the denominator by the same number is like multiplying it by 1. (2.041/2.041 = 1)
Re-writing the result as a percentage, we can see that 29/49 as a percentage is 59.184%.
Method 2 – Converting 29/49 Into a Percentage Using Decimals:
Another common way to convert a fraction into a percentage is to first convert the fraction into a decimal. To convert 29/49 into a percentage, you would first convert 29/49 into a decimal by dividing the numerator by the denominator:
29/49 = 0.592
Once you have converted the fraction into a decimal, you can simply multiply by 100 to get the percentage:
0.592 x 100 = 59.184
And there you go! Now we can see that 29/49 as a percentage is 59.184%, the same way we did with the first method.
Now you know of two ways you can convert 29/49 into a percentage. The best way to master these methods is to practice!
Grab a pencil and paper, and come up with some of your own fractions, and become a master at converting them into percentages! Practice more percentage conversion problems With a just a few more problems, you could become a pro at converting fractions to percentages. You can try some more right now! What is 35/32 as a percent? What is 56/65 as a percent? What is 82/96 as a percent? What is 36/50 as a percent? What is 84/61 as a percent?
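Both methods above boil down to the same arithmetic; a short Python sketch (the function name is illustrative):

```python
def fraction_to_percent(num, den, places=3):
    # Method 2: divide the numerator by the denominator, then multiply by 100.
    return round(num / den * 100, places)

print(fraction_to_percent(29, 49))  # → 59.184
```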
[Free] 2019(Nov) EnsurePass F5 101 Dumps with VCE and PDF 271-280

Get Full Version of the Exam http://www.EnsurePass.com/101.html

Question No.271
Which of the following is not a method of protection for user-input parameters?
A. Value extraction
B. Attack signatures
C. Length restriction
D. Meta character enforcement
Correct Answer: A

Question No.272
Which of the following business benefits does storage tiering offer to customers?
A. Reduces time for backups because data on the secondary tier can have a less time-intensive backup policy applied to it.
B. All of the above.
C. Enables customers to apply a more aggressive RTO/RPO for business-critical Tier-1 unstructured data.
D. Reduces money spent on storage since the majority of data can be moved to less expensive secondary tier storage.
Correct Answer: B

Question No.273
Which two of these statements about OneConnect are true? (Choose two.)
A. It decreases the CPU load on LTM
B. It aggregates multiple client connections into a single server connection
C. It decreases the amount of traffic between multiple clients and LTM
D. It requires SNAT to be configured
E. It decreases the CPU load on pool members
Correct Answer: BE

Question No.274
A monitor has been defined with an alias port of 443. All other options are left at their defaults. The administrator wishes to assign it to a pool of members where the members' ports vary. Which is the result?
A. For each member, if the member port is not 443, the member will be marked down.
B. For each member, the monitor will test the member node at port 443.
C. For each member, if it is running an SSL service at the member port, the monitor may work. Otherwise, the monitor will fail and the member will be marked down.
D. This assignment is not allowed since the ports do not match.
Correct Answer: B

Question No.275
A site would like to ensure that a given web server's default page is being served correctly prior to sending it client traffic. They assigned the default HTTP monitor to the pool. What would the member status be if the member sent an unexpected response to the GET request?
A. The pool member would be marked offline (red).
B. The pool member would be marked online (green).
C. The pool member would be marked unknown (blue).
D. The pool member would alternate between red and green.
Correct Answer: B

Question No.276
Assuming other failover settings are at their default state, what would occur if the failover cable were to be disconnected for five seconds and then reconnected?
A. As long as network communication is not lost, no change will occur.
B. Nothing. Failover due to loss of voltage will not occur if the voltage is lost for less than ten seconds.
C. When the cable is disconnected, both systems will become active. When the voltage is restored, unit two will revert to standby mode.
D. When the cable is disconnected, both systems will become active. When the voltage is restored, both systems will maintain active mode.
Correct Answer: C

Question No.277
Which two of the following are costs businesses may face in dealing with unstructured data? (Choose two.)
A. Lost productivity due to server downtime
B. Buying backup media
C. Buying additional storage capacity
D. Paying to convert unstructured data into structured data
Correct Answer: BC

Question No.278
Basic F5 IP Geolocation provides which four types of client information? (Choose four.)
A. State
B. Continent
C. Postal code
D. City
E. Carrier
F. Country
Correct Answer: ABEF

Question No.279
Which item is NOT a function of a properly deployed and configured ASM?
A. Detects attacks
B. Stops hackers from attacking
C. Provides protection visibility
D. Provides security agility
Correct Answer: B

Question No.280
Why does deploying LTM into an existing network immediately improve security?
A. Only requests for specific ports are allowed through LTM.
B. All traffic through LTM is checked for DDoS attacks.
C. No traffic is allowed through LTM until it has been specified.
D. All users must authenticate before accessing applications through LTM.
E. Only LAN administrators can access resources through LTM.
Correct Answer: C
1. GATE CSE 1994 — True or False (+2, −0)
Let $p$ and $q$ be propositions. Using only the truth table, decide whether "$p \Leftrightarrow q$ does not imply $p \to \sim q$" is true or false.
A. TRUE
B. FALSE

2. GATE CSE 1994 — MCQ (Single Correct Answer) (+1, −0.3)
Let A and B be any two arbitrary events. Then, which one of the following is true?
A. $P(A \cap B) = P(A)P(B)$
B. $P(A \cup B) = P(A) + P(B)$
C. $P(A \mid B) = P(A \cap B)P(B)$
D. $P(A \cup B) < P(A) + P(B)$

3. GATE CSE 1994 — MCQ (Single Correct Answer) (+2, −0.6)
Some group $(G, o)$ is known to be abelian. Then, which one of the following is true for $G$?
A. $g = g^{-1}$ for every $g \in G$.
B. $g = g^{2}$ for every $g \in G$.
C. $(g \, o \, h)^2 = g^2 \, o \, h^2$ for every $g, h \in G$.
D. $G$ is of finite order.

4. GATE CSE 1994 — MCQ (Single Correct Answer) (+2, −0.6)
The number of substrings (of all lengths inclusive) that can be formed from a character string of length $n$ is
A. $n$
B. $n^2$
C. $\frac{n(n-1)}{2}$
D. $\frac{n(n+1)}{2}$
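The count in Question 4 can be sanity-checked by brute force: $\frac{n(n+1)}{2}$ counts every (start, end) position pair, i.e. substrings that are not necessarily distinct.

```python
# Count all (not necessarily distinct) substrings of a length-n string:
# one substring per pair of start index i and end index j with i <= j.
def substring_count(n):
    return sum(1 for i in range(n) for j in range(i, n))

for n in (1, 4, 10):
    assert substring_count(n) == n * (n + 1) // 2
print(substring_count(4))  # → 10
```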
Assuming we have a single-level (L1) cache and main memory, what are some of the advantages and disadvantages of having a larger cache block size (considering average memory access time)? The only ones I can think of are that a larger block size could increase the hit rate when adjacent memory locations are accessed, i.e. good for spatial locality. I can't think of any other advantages; are there? As for disadvantages, I assume that having a larger block size increases the cache size, thereby increasing cost (since L1 cache is expensive), but I can't see any disadvantages relating to average memory access time. Are there?

3 Answers

An increased block size is indeed good for spatial locality. On the other hand, a large block size increases the possibility of fragmentation and false sharing (in multiprocessor systems). Another way of thinking about this problem is if your cache size is fixed (based on cost, etc.), and you are changing the block size. In this case, as you increase your cache block size, accessing adjacent memory locations will have more hits (spatial locality), but there is a disadvantage considering temporal locality. Consider the extreme case, in which the cache has one block. Of course, this would be good for spatial locality, but it is terrible for a program repeatedly accessing two memory locations that are at least one full block size away from each other. The cache will miss every time.

EDIT

Consider this: you have a process with data in consecutive memory locations "A" through "H" of size "1." You have a warm cache of size "4" (ignoring compulsory misses, the misses/repeat below are average case) and an LRU cache replacement policy.

Let the cache block size be 4 (the "largest" block size case).
• For repeated, in-order memory accesses A-H, this cache has 2 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, C, D, G, H, this cache has 4 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, this cache has 2 misses/repeat.
• For repeated, random memory accesses A, E, B, F, this cache has 4 misses/repeat.

Let the cache block size be 1 (the "smallest" block size case).
• For repeated, in-order memory accesses A-H, this cache has 8 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, C, D, G, H, this cache has 8 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, this cache has 0 misses/repeat.
• For repeated, random memory accesses A, E, B, F, this cache has 0 misses/repeat.

Let the cache block size be 2 (the "optimal" block size case).
• For repeated, in-order memory accesses A-H, this cache has 4 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, C, D, G, H, this cache has 4 misses/repeat.
• For repeated, out-of-order memory accesses A, B, E, F, this cache has 0 misses/repeat.
• For repeated, random memory accesses A, E, B, F, this cache has 2 misses/repeat.

The "largest" block size case is best for repeated, in-order memory accesses (best for spatial locality) and worst for repeated, random memory accesses (worst for temporal locality). The "smallest" block size case is worst for repeated, in-order memory accesses (worst for spatial locality) and best for repeated, random memory accesses (best for temporal locality). It is good for out-of-order memory accesses, depending on the size of the working set. The "optimal" block size case is good for repeated, in-order memory accesses and good for repeated, random memory accesses, but it is not the "best" for either case.

• wouldn't that also be the case if the blocks were normal size, with the main difference being that smaller blocks have a smaller miss penalty – dmnte Commented May 16, 2016 at 16:08
• What do you mean by "that?"
"Larger" block sizes suffer from an increased possibility of fragmentation and false sharing, and they are (relative to "smaller" block sizes) bad for temporal locality (assuming a fixed cache size). – user50991 Commented May 16, 2016 at 16:42
• I have edited my answer to include a simple example. – user50991 Commented May 16, 2016 at 18:21

Advantages

The advantages of larger block size include: smaller tag storage (or larger cache capacity for a given tag storage budget), greater bandwidth efficiency, memory error correction code efficiency, potentially improved way prediction/memoization, potentially larger access bandwidth, effective prefetching under sufficient spatial locality in the reference stream, and reduced coherence directory overhead (and potentially reduced coherence traffic overhead).

Smaller tags (doubling block size halves the number of tags and removes one bit per tag) can be a significant consideration if tag access latency or storage capacity is a significant concern. In the past (and for huge off-chip caches) greatly reducing the number of tags could make the difference between fitting the tags on chip (with latency and pin-count benefits) and having tags off-chip. Larger cache size (under constrained tag storage) and lower tag access latency have obvious benefits for average memory access time. (Obviously, at extremely small block sizes, the total storage overhead for tags may noticeably impact data capacity.) (The size of an off-chip cache could also be chosen after processor chip manufacture with little additional hardware cost in the processor chip, previously not uncommon for off-chip L2 caches. Incidentally, doubling block size would also allow doubling the cacheable memory address space with the same tags.)

Greater bandwidth efficiency comes from knowing that a larger chunk of memory will be retrieved.
With a given DRAM burst length, the size of an access constrains the width of the interface. (An implementation could always fetch the adjacent cache block of data to gain the advantage for reads at a lower capacity cost, storing such in a small prefetch buffer. Writebacks would be more complicated, but they are less common and writeback-bandwidth-constrained workloads would tend to have spatial locality that could be exploited by checking for adjacent blocks.)

ECC encoding is more efficient given a larger block size. While this would not affect the L1 overhead since L1 would handle sub-block writes, the overhead in memory for a given amount of correction would be lower. Fetch size can also impact the practicality of chip-scale redundancy (e.g., IBM's ChipKill). A wider interface can use more DRAM chips with the same commodity bit width and burst length, reducing the overhead to provide a given level of correction even when an entire DRAM chip is unreliable, because the fraction of information coming from one chip is less. Memory ECC encoding is typically over chunks smaller than blocks, but extending the encoding beyond fetch units is more complex and adds overhead (in theory, information required for correction could be shared over more than one chunk, requiring writebacks to have larger granularity).

A larger block size can also improve way prediction accuracy by reducing choice (all chunks within a cache block that would be separate cache blocks with a smaller block size are in the same way by definition) and exposing greater spatial locality (this improves warm-up and hit rate in some predictors). For partial virtual tag way predictors, larger block sizes increase the number of tag bits available for a given storage budget (which is constrained by latency), increasing accuracy. Way prediction can provide lower access latency.
Way memoization for streaming accesses (most instruction cache accesses with modest fetch width and a significant number of data accesses) will be available for more accesses before the way changes. A larger block size also facilitates higher access bandwidth by facilitating a larger number of banks within the same cache block. By using what is effectively single-cycle tag memoization, any banks within a block can supply data with only a single tag check; a larger block increases the chance that more than one access will be within the block. This does not decrease the latency of an individual access but allows more accesses to be started earlier, reducing the effective average memory access time.

The advantage of effective prefetching under adequate spatial locality of access is the commonly presented advantage. This advantage is reduced given prefetching which can dynamically exploit spatial locality when present without the storage overhead of larger blocks.

The storage overhead for directory-based coherence is also reduced with a larger block size. Full directories store the coherence state for each block, often by tracking it for each memory block, so using a larger block obviously reduces the relative overhead. Similarly, caching directory entries becomes more effective. (Coherence traffic may also be reduced due to spatial locality. As with prefetching, smaller blocks can be used while dynamically choosing larger coherence requests to reduce the number of requests on the network. False sharing increases coherence traffic.) (Software invalidation of large cacheable regions may also be faster with larger blocks as fewer blocks need to be explicitly invalidated.)

Disadvantages

The disadvantages of larger block size include: wasted bandwidth and storage under lower spatial locality, false sharing issues, high latency if early use is not supported, and a higher conflict rate.
Memory bandwidth is wasted when data that is fetched is not used and when data that was not modified is written back to memory (or unnecessarily fetched under read for ownership). With a larger block size, both of these are more likely to occur because spatial locality of accesses is limited. Cache capacity is wasted when data is fetched but not used (for reasonable-size blocks, tag storage is much less than data storage, so increased tag overhead with small blocks is generally not significant in terms of total storage capacity).

Under invalidation-based coherence protocols, cache blocks are invalidated when a write is made anywhere within the block by another agent. With larger blocks, more of the block will be invalidated. This false sharing is effectively an expression of limits of spatial locality of writes in a multithreaded context. This issue can be reduced using sectored cache blocks so that only the sector is invalidated. While traditional sectoring would waste capacity when a significant number of sub-blocks are invalid, larger cache blocks with sectoring may be a good choice. (Sectoring can also facilitate software compatibility when cache block size was exposed.) In theory, instruction and data cache false sharing is also possible, though writes to active code space are discouraged in modern high performance systems.

If the entire cache block must be loaded before any values from it are used, then a larger cache block (with constant bandwidth) will increase miss latency. (In theory, if multiple cache blocks could be fetched from memory in parallel, the fill latency could also impact effective bandwidth if separate buffers were not provided for each potential ongoing fill. Sometimes miss handling might be stalled waiting for a block fill to complete and to free a buffer entry; this would be more likely to happen with longer fill latency.)
A larger block size tends to increase the number of conflict misses; with a smaller number of sets, the chance of multiple near-in-time accesses having the same index is higher. Ironically, spatial locality of access can increase the rate of conflicts under traditional indexing, because if one access conflicts, nearby accesses are also likely to conflict. (Skewed associativity could reduce this effect.)

Overview

With larger caches, the conflict misses from larger blocks are less common because either the number of sets is increased (so the chance of indices matching is lower) or the number of ways is increased (so matching indices are less likely to evict a useful cache block). The larger access latency of larger caches also reduces the latency penalty of higher associativity. In addition, larger caches increase the residency time of a block, which tends to increase the spatial locality. With even the simplest implementation of prefetching (loading all the blocks equivalent to a larger block), the spatial-locality-based hit rate benefit of larger block sizes effectively disappears while retaining finer-grained replacement choice (and coherence invalidation). (For L2+ caches, which are not considered in this question, tag storage size can be a significant consideration since such caches are typically accessed with tag-data phasing and farther caches are not probed until a miss is determined. Smaller tag storage then means lower hit latency and lower miss determination latency. In addition, if the tag storage is less area-efficient, the cost of tag storage becomes more significant; the SRAM tags with DRAM data storage in some IBM POWER L3 caches is a case where smaller bit count tag storage has a greater benefit.)
64 bytes appears to be a fairly settled block size for "general purpose" L1 caches, in part from the commoditization of 64-bit memory interfaces with DRAM burst length of 8 (and software optimizations based on this size) but also matching reasonably well "typical" spatial locality for "typical" capacity L1 caches. Some server-oriented processors use larger blocks in part because larger capacity last-level caches are often helpful (having closer matching of L1 and L2 block sizes can be helpful) and because spatial locality may be more common. Some GPUs use larger L1 cache blocks, presumably to support greater memory bandwidth and access bandwidth with the common case also having considerable spatial locality with weak temporal locality (streaming accesses or nearly so).

Increasing the cache line size while keeping the number of lines constant increases the cache size at massively increased cost. Not something I'd count as a disadvantage; you get what you pay for. I'd assume a fixed cache size, therefore fewer lines. Which is bad if cache lines are only partially used.
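The warm-cache miss counts in the worked example above can be checked with a small simulator. This sketch assumes a fully associative cache with LRU replacement (the example does not fully specify its indexing model, so only the in-order and A, B, E, F rows — which match this model — are checked; the random-access rows depend on indexing details left unspecified):

```python
# Fully associative LRU cache simulator for the A..H example:
# total capacity 4 units, addresses A..H mapped to 0..7.
from collections import OrderedDict

def misses_per_repeat(addresses, block_size, capacity=4, repeats=10):
    num_blocks = capacity // block_size
    cache = OrderedDict()              # block id -> None, kept in LRU order
    misses = 0
    for r in range(repeats):
        if r == repeats - 1:           # count only the final (warm) repeat
            misses = 0
        for addr in addresses:
            block = addr // block_size
            if block in cache:
                cache.move_to_end(block)
            else:
                misses += 1
                cache[block] = None
                if len(cache) > num_blocks:
                    cache.popitem(last=False)   # evict the LRU block
    return misses

in_order = list(range(8))              # A..H
abef = [0, 1, 4, 5]                    # A, B, E, F
for bs, expected in [(4, 2), (1, 8), (2, 4)]:
    assert misses_per_repeat(in_order, bs) == expected
for bs, expected in [(4, 2), (1, 0), (2, 0)]:
    assert misses_per_repeat(abef, bs) == expected
print("in-order and A, B, E, F rows match the example")
```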
BlinkStick Nano SetColor causing 'System.IO.IOException' in HidSharp.dll
Hi. I have a 2-LED BlinkStick Nano and I'm editing the SetColor instruction in the C# code below to turn off LED1 and experiment with different LED2 colors. On the first run, the program worked. However, the program now outputs the following when run in debug mode in Visual Studio 2022:
Device BS044857-3.0 opened successfully
Exception thrown: 'System.IO.IOException' in HidSharp.dll
Device col 0: #000000
Device col 1: #FFFFFF
Serial: BS044857-3.0
What might be causing the exception when the program reaches device.SetColor, please? When the exception occurs, the LED color doesn't change to the desired RGB settings in SetColor. Is the syntax correct?
private void TestNano()
{
Console.WriteLine("Test for BlinkStick Nano.\r\n");
BlinkStick device = BlinkStick.FindFirst();
if (device != null && device.OpenDevice())
{
Console.WriteLine(String.Format("Device {0} opened successfully", device.Serial));
// Use the function SetColor(byte channel, byte index, byte r, byte g, byte b) to set the colour
device.SetColor(0, 0, 0, 0, 0); //Turns off the first LED
// device.SetColor(0, 1, 255, 255, 255); //Sets colour of the second LED white for first debug
device.SetColor(0, 1, 255, 0, 0); //Sets colour red on second debug start
byte cr0;
byte cg0;
byte cb0;
byte cr1;
byte cg1;
byte cb1;
device.GetColor(0, out cr0, out cg0, out cb0);
device.GetColor(1, out cr1, out cg1, out cb1);
Console.WriteLine(String.Format(" Device col 0: #{0:X2}{1:X2}{2:X2}", cr0, cg0, cb0));
Console.WriteLine(String.Format(" Device col 1: #{0:X2}{1:X2}{2:X2}", cr1, cg1, cb1));
Console.WriteLine(" Serial: " + device.Serial);
Thread.Sleep(5000);
device.TurnOff();
}
device.Dispose();
}
If I comment out the above line, the problem seems to be solved. So how do I set the color of the first LED in a 2-LED Nano?
Employee Cybersecurity Training: How To Empower Your Workforce

Introduction to Cybersecurity Training

Cybersecurity has become a critical issue for organizations worldwide. With the increasing reliance on digital technology and the internet, businesses face various cyber threats that can lead to significant financial and reputational damage. Employee cybersecurity training is one of the most critical aspects of an effective cybersecurity strategy.

Why Employee Cybersecurity Training is Crucial

Employees often serve as the first line of defense against cyber threats. As they interact with various digital systems daily, they may encounter potential risks such as phishing emails or malicious websites. Employees not adequately trained in cybersecurity best practices may inadvertently expose the organization to cyberattacks.

Types of Cyber Threats

Some common cyber threats that organizations face include:

• Phishing attacks
• Ransomware
• Insider threats
• Malware
• Data breaches

You can read more about cyber threats here.

Developing a Cybersecurity Training Program

Developing and implementing a robust cybersecurity training program for employees is crucial to safeguard your organization from these risks.

Define Training Goals

The first step in creating a cybersecurity training program is to define the goals you want to achieve. These goals might include:

• Improving employees' knowledge of cybersecurity risks and best practices
• Reducing the likelihood of successful cyberattacks
• Ensuring compliance with relevant regulations and industry standards

Analyze Employee Roles and Needs

Next, consider your employees' unique roles and needs.
Some employees may require more in-depth training due to their access to sensitive data, while others may need training focused on specific applications or devices.

Define Training Goals

Defining clear and specific training goals is crucial in creating an effective cybersecurity training program. These goals will serve as the foundation of your program and help you determine the content, structure, and delivery methods of your training. Here are some aspects to consider when defining your training goals:

1. Knowledge Enhancement: Your training program should aim to improve employees' understanding of cybersecurity risks, best practices, and the consequences of security breaches. This will help them make informed decisions and take appropriate actions to protect your organization's assets.

2. Behavior Change: Encourage employees to adopt secure behaviors by highlighting the benefits of following best practices and the potential risks of non-compliance. Your training goals should emphasize fostering a security-conscious mindset that prioritizes protecting sensitive data and systems.

3. Compliance with Regulations and Standards: Many industries have specific regulations and standards that organizations must adhere to in order to maintain compliance. Your training goals should include educating employees on these requirements and ensuring they understand their responsibilities in maintaining compliance.

4. Reducing Incident Response Time: A swift response to security incidents can significantly minimize the damage caused by a cyberattack. Your training program should aim to improve employees' abilities to recognize and report security incidents and equip them with the knowledge to respond effectively.

5. Targeted Training for Different Roles: Employees have different organizational responsibilities, and their training needs may vary accordingly.
Your training goals should address the specific needs of various roles, such as IT personnel, management, and customer service representatives.

6. Continuous Improvement: The cybersecurity landscape is constantly evolving, with new threats and vulnerabilities emerging regularly. Your training goals should emphasize the importance of continuous learning and staying up-to-date with the latest cybersecurity trends and best practices.

By clearly defining your training goals, you can create a focused and effective cybersecurity training program that addresses your organization's and employees' unique needs. This will ultimately help to reduce the likelihood of successful cyberattacks and enhance the overall security posture of your organization.

Create a Comprehensive Curriculum

Develop a comprehensive curriculum that covers all aspects of cybersecurity relevant to your organization. This curriculum should include a mix of technical and non-technical topics to ensure employees gain a well-rounded understanding of cybersecurity best practices.

Implementing the Training Program

Once your curriculum is developed, it's time to implement the training program.

Delivery Methods

Choose a delivery method that best suits your organization's needs, such as in-person sessions, online courses and webinars, interactive e-learning modules, or video tutorials.

Measuring Success

To gauge the effectiveness of your training program, establish clear metrics for success. These might include:

• Increased employee awareness of cybersecurity risks
• Decrease in successful cyberattacks or security incidents
• Improved compliance with regulations and industry standards

Well-structured Training is Vital for Success

A comprehensive and well-structured curriculum is vital for the success of your employee cybersecurity training program. The curriculum should cover a wide range of topics to ensure employees have a well-rounded understanding of cybersecurity best practices and the potential risks they may encounter. Here are some critical steps to consider when creating your curriculum:

1.
Identify Core Topics: Start by identifying the core topics most relevant to your organization and industry. These topics should give employees a solid foundation in cybersecurity concepts, best practices, and potential threats. Some examples of core topics include:

• Password management
• Email security
• Secure browsing
• Mobile device security
• Social engineering awareness
• Incident reporting and response

2. Tailor Content to Employee Roles: Different employees have different roles within your organization, and their training needs may vary accordingly. Customize your curriculum to address the unique needs and responsibilities of various employee roles, such as IT personnel, management, and customer service representatives.

3. Incorporate Real-Life Examples: Help employees understand the real-world implications of cybersecurity threats by incorporating case studies, examples, and scenarios into your curriculum. This will enable them to better recognize and respond to similar situations in their day-to-day work.

4. Develop Hands-On Exercises: Include practical, hands-on exercises in your curriculum to help employees apply their knowledge and skills in real-life situations. This can involve simulated phishing attacks, password strength tests, or exercises that teach employees to identify and report suspicious activities.

5. Address Compliance Requirements: If your organization is subject to specific industry regulations or standards, ensure your curriculum covers the relevant compliance requirements. Educate employees on these requirements and their responsibilities in maintaining compliance.

6. Plan for Regular Updates: As the cybersecurity landscape evolves rapidly, it's essential to keep your curriculum up-to-date with the latest threats, best practices, and technologies. Schedule regular updates to your curriculum and incorporate emerging trends and developments.

7.
Assessments and Evaluations: Include assessments and evaluations throughout your curriculum to measure employee understanding and retention of the material. This can help you identify areas that may require additional training or clarification.

By creating a comprehensive curriculum that covers a wide range of topics and is tailored to your organization's and employees' needs, you can equip your workforce with the knowledge and skills they need to effectively protect your organization's assets and maintain a strong cybersecurity posture.

Best Practices for Employee Cybersecurity Training

To ensure your cybersecurity training program is effective, consider the following. Creating engaging training content is crucial for ensuring employees remain interested and motivated throughout the cybersecurity training program. Effective training content makes the learning process more enjoyable, helps improve knowledge retention, and encourages the adoption of secure behaviors. Here are some strategies to make your training content more engaging:

1. Use Interactive Elements: Incorporate interactive elements such as quizzes, polls, and simulations to keep employees actively involved in the learning process. This can help break up long sessions and maintain a high level of engagement.

2. Incorporate Storytelling: Utilize storytelling techniques to make the training content more relatable and memorable. Present cybersecurity concepts and best practices through real-life scenarios or fictional stories that illustrate the potential consequences of security breaches.

3. Leverage Multimedia: Use various multimedia formats such as videos, infographics, and images to present the training content in an engaging and visually appealing manner. This can help cater to different learning preferences and enhance understanding.

4.
Gamification: Introduce gamification elements, such as points, badges, and leaderboards, to motivate employees and foster friendly competition. This can help make the training experience more enjoyable and encourage employees to participate actively in the learning process. 5. Encourage Collaboration: Encourage employees to collaborate and share their experiences during the training program. This can be facilitated through group discussions, team exercises, or online forums. Collaboration promotes engagement and allows employees to learn from one another. 6. Personalize the Learning Experience: Customize the training content to address individual employees’ unique needs and interests. This can be achieved through adaptive learning technologies or by offering a choice of learning paths and activities. 7. Provide Feedback and Support: Offer regular feedback and support to employees throughout the training program. This can involve providing personalized feedback on their progress, addressing questions or concerns, and offering additional resources or guidance. Do incorporate these strategies in your employee cybersecurity training. You can create engaging and effective training content that captures employees’ attention and keeps them motivated throughout the cybersecurity training program. This will ultimately lead to better knowledge retention and a more secure workforce equipped to protect your organization’s assets from potential cyber threats. Cybersecurity Awareness Month To protect yourself and your business from cyber attacks, following best practices for cybersecurity is essential. Here you can read more about Cybersecurity Awareness Month and get some tips and recommendations for Cybersecurity Awareness Month 2023. Summary Employee cybersecurity training is an essential component of a comprehensive cybersecurity strategy. 
Creating a well-rounded curriculum, delivering engaging content, and regularly updating the training program can significantly reduce the likelihood of successful cyberattacks and protect your organization’s valuable assets. Investing in employee training enhances your organization’s security posture and fosters a culture of security awareness that benefits everyone. Read more about cybersecurity on my website. FAQ Why is employee cybersecurity training important? Employee cybersecurity training is crucial because employees often serve as the first defense against cyber threats. A well-trained workforce can help prevent security incidents and protect the organization’s valuable assets. What topics should be included in a cybersecurity training program? Topics in a cybersecurity training program might include password management, email security, mobile device security, social engineering awareness, and incident reporting and response. How often should employee cybersecurity training be updated? Cybersecurity training should be updated regularly, ideally at least once a year, to ensure employees stay informed about the latest threats and best practices. Additionally, periodic follow-ups can help reinforce knowledge and skills. What are some practical methods for delivering cybersecurity training? Some effective delivery methods for cybersecurity training include in-person sessions, online courses and webinars, interactive e-learning modules, and video tutorials. Choose a method that best suits your organization’s needs and resources. How can we measure the success of our cybersecurity training program? To measure the success of your cybersecurity training program, establish clear metrics such as increased employee awareness of cybersecurity risks, a decrease in successful cyberattacks or security incidents, and improved compliance with regulations and industry standards. 
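The password strength tests mentioned among the hands-on exercises above can be sketched as a simple rule-based check. This is an illustrative sketch only: the class name `PasswordStrength` and the specific rules (minimum length 10, plus mixed case, a digit, and a symbol) are assumptions for the exercise, not an industry standard.

```java
// Hypothetical hands-on exercise: a minimal rule-based password-strength check.
// The thresholds below are illustrative assumptions, not an official policy.
public class PasswordStrength {

    public static boolean isStrong(String password) {
        // Rule 1 (assumed): at least 10 characters.
        if (password == null || password.length() < 10) {
            return false;
        }
        boolean upper = false, lower = false, digit = false, symbol = false;
        for (char c : password.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
            else symbol = true; // anything else counts as a symbol
        }
        // Rules 2-5 (assumed): require all four character classes.
        return upper && lower && digit && symbol;
    }

    public static void main(String[] args) {
        System.out.println(isStrong("password"));      // too short, no variety
        System.out.println(isStrong("Tr0ub4dor&3!x")); // meets all four rules
    }
}
```

In a training session, employees could be asked to run their own passwords (or sample ones) through such a checker and discuss which rule each weak password fails.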
I'm helping one-person companies, small businesses, and individuals navigate the complex world of cybersecurity. After working for three decades in cyber and information security, I now write articles about these topics on larsbirkeland.com.
Definitions for Multimedia (/ˌmʌl tiˈmi di ə, ˌmʌl taɪ-/)
This page provides all possible meanings and translations of the word Multimedia.

Princeton's WordNet
1. multimedia, multimedia system (noun): transmission that combines media of communication (text and graphics and sound etc.)

Wiktionary
1. multimedia (noun): the use of different media to convey information; text together with audio, graphics and animation, often packaged on CD-ROM with links to the Internet
2. multimedia (adjective): of, or relating to, this combined use of media
3. multimedia (adjective): of, or relating to, an application that can combine such media into an integrated package

Freebase
1. Multimedia: Multimedia is media and content that uses a combination of different content forms. This contrasts with media that use only rudimentary computer displays such as text-only, or traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, or interactive content forms. Multimedia is usually recorded and played, displayed, or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia devices are electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art; by including audio, for example, it has a broader scope. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.

U.S. National Library of Medicine
1. Multimedia: Materials, frequently computer applications, that combine some or all of text, sound, graphics, animation, and video into integrated packages. (Thesaurus of ERIC Descriptors, 1994)

Numerology
1. Chaldean Numerology: the numerical value of Multimedia in Chaldean Numerology is 6.
2. Pythagorean Numerology: the numerical value of Multimedia in Pythagorean Numerology is 8.

Sample Sentences & Example Usage
1. Added Zadan: It's a multimedia musical number.

Source: "Multimedia." Definitions.net. STANDS4 LLC, 2016. Web. 6 May 2016.
EJB Learning Notes

Terminology (annotations):
@PersistenceContext: injects the persistence context used to operate on entities
@Stateless: marks a stateless session bean
@Remote: marks a remote interface

I. EJB interfaces: remote and local?

II. EJB (Enterprise JavaBeans) basics:
1. EJB is a standard server-side component model for distributed business applications. Applications built on the EJB architecture are scalable, transactional, and multi-user safe. Write them once, then deploy them on any server platform that supports the EJB specification, such as JBoss or WebLogic.
2. EJB defines three kinds of enterprise beans: session beans, entity beans, and message-driven beans.
Session bean: implements business logic and comes in stateful and stateless forms. Whenever a client sends a request, the container picks a session bean to serve it. A session bean may access the database directly, but more commonly it does so through entity beans. (@PersistenceContext declares the persistence context object the session bean operates on.)
Entity bean: as the name suggests, an entity bean represents the data of a real-world object. Think of it as a JavaBean used to hold data, with one extra ability beyond an ordinary JavaBean: besides holding data, it maps objects and relationships to the database. (@Entity, @Table(name = "tableName"))
Message-driven bean (MDB): a component designed specifically to handle message-based requests. It can send and receive asynchronous JMS messages and interact easily with other EJBs, which makes it well suited to business operations that run for a long time and whose results need not be reported to the user in real time.

III. Session beans
Session beans implement business logic, in stateful and stateless forms; whenever a client sends a request, the container picks a session bean to serve it. Session beans appear as business-processing objects in all kinds of application architectures.

1. JNDI
A client looks up an EJB through JNDI (e.g., JSP -> EJB).
JNDI (the Java Naming and Directory Interface) is a set of APIs for accessing naming and directory services from Java applications. It gives developers a common, unified way to look up and access all kinds of naming and directory services. Through the JNDI interfaces you can locate users, machines, networks, objects, and services by name.
Naming service: like DNS, it is provided by a naming server; most J2EE servers include one.
Directory service: a simplified RDBMS-like system that stores simple information in the attributes of directory entries. It is provided by a directory server, for example Microsoft Active Directory.
Benefits of JNDI:
(1) It covers a large number of naming and directory services; the same API calls can access any of them.
(2) It can connect to several naming and directory services at the same time.
(3) It lets you associate names with Java objects or resources without knowing their physical IDs.
(4) It accesses different kinds of directory services through a common interface.
(5) Developers can concentrate on using and implementing a single type of naming/directory client API.
What is a context? Zero or more bindings. For example, in java/MySql, java is the context and MySql is the name.
What is a subcontext? A context under another context. For example, in MyJNDITree/ejb/helloBean, ejb is a subcontext.
The JNDI programming model
Because JNDI is a set of interfaces, you only need to program against the interface specification. To access resources through JNDI you must set the initial-context parameters, mainly the JNDI driver class name (java.naming.factory.initial) and the URL of the naming service (java.naming.provider.url). There are many JNDI implementations, so the value of java.naming.factory.initial differs by JNDI provider; java.naming.provider.url holds the host address and port of the naming service.
Example code for accessing a JBoss server:
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
props.setProperty("java.naming.provider.url", "localhost:1099");
InitialContext ctx = new InitialContext(props); // set up the JNDI environment; if the client runs inside JBoss, props need not be passed in
HelloWorld helloworld = (HelloWorld) ctx.lookup("HelloWorldBean/remote");
Example code for accessing a Sun application server:
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.provider.url", "localhost:3700");
InitialContext ctx = new InitialContext(props);
HelloWorld helloworld = (HelloWorld) ctx.lookup("com.foshanshop.ejb3.HelloWorld");
Example code for accessing WebLogic 10:
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "weblogic.jndi.WLInitialContextFactory");
props.setProperty("java.naming.provider.url", "t3://localhost:7001");
InitialContext ctx = new InitialContext(props);
HelloWorld helloworld = (HelloWorld) ctx.lookup("HelloWorldBean#com.foshanshop.ejb3.HelloWorld");

JBoss JNDI-tree naming conventions:
① java:comp — this context and its subcontexts can be accessed and used only by the particular component they belong to.
② java: — this subcontext and the objects bound in it can be accessed only by applications inside the JBoss server JVM.
③ Other contexts — can be called by remote clients as long as the bound objects implement serialization.

2. Developing stateless session beans (Stateless Session Beans)
A stateless session bean implements single-use services. Such a service can be invoked many times, but because the bean retains no state information, each invocation is an independent use; in many cases a stateless session bean therefore provides a reusable single-use service.
Although a stateless session bean maintains no conversational state for a particular client, it can have transient state in the form of its member variables. When a client calls a method on a stateless bean, the values of the bean's member variables represent a transient state only for the duration of the call; when the method completes, that state is not retained.
Except during a method call, all instances of a stateless session bean are identical, which allows the EJB container to assign any instance to any client. Many application servers exploit this by pooling stateless session beans for better performance.
Because stateless session beans can serve multiple clients and are usually pooled in the EJB container, they scale better for applications with many clients. Their performance advantage over stateful session beans is why developers should prefer stateless beans whenever conditions allow.
@Stateless and @Remote: the first annotation declares a stateless session bean; the second names the bean's remote interface, indicating that the implemented interface is remote. Using these annotations requires the EJB class libraries, which can be found under the JBoss installation directory in client, /server/default/deploy/jboss-aop-jdk50.deployer, /server/default/deploy/ejb3.deployer, and /lib/endorsed, or in the Lib folder of the source code.
The interface is defined in HelloWorld.java. The implementation class follows the naming rule interface + Bean, e.g. HelloWorldBean.
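The notes above mention HelloWorld.java and the interface + Bean naming rule but do not reproduce the code itself. A minimal sketch consistent with the surrounding examples might look like the following; the method name SayHello and the package com.foshanshop.ejb3 are taken from the client code shown earlier, while the method body and greeting text are assumptions, and the imports come from the EJB 3.0 API (javax.ejb), so this fragment needs an EJB container such as JBoss to actually run.

```java
package com.foshanshop.ejb3;

import javax.ejb.Remote;
import javax.ejb.Stateless;

// The remote business interface (HelloWorld.java).
public interface HelloWorld {
    String SayHello(String name);
}

// The implementation class, named interface + "Bean" (HelloWorldBean.java).
// Packaged into HelloWorld.jar and deployed under JBoss, its remote interface
// would be bound at the default JNDI name HelloWorldBean/remote.
@Stateless
@Remote({HelloWorld.class})
class HelloWorldBean implements HelloWorld {
    public String SayHello(String name) {
        // Illustrative body; the notes do not show the original implementation.
        return name + ", hello!";
    }
}
```

The client then looks this bean up through JNDI exactly as in the Properties examples above.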
***************************************************************************************************************************************
A key point: JBoss's default naming rules for EJB JNDI names are as follows:
1> If the EJB is packaged into a J2EE deployment file with the suffix *.ear, the default JNDI paths are
local interface: EAR-FILE-BASE-NAME/EJB-CLASS-NAME/local
remote interface: EAR-FILE-BASE-NAME/EJB-CLASS-NAME/remote
Example: if the EJB HelloWorld is packaged into a J2EE application named HelloWorld.ear, the JNDI name for its remote interface is: HelloWorld/HelloWorldBean/remote
2> If the EJB application is packaged into a deployment file with the suffix *.jar, the default JNDI paths are
local interface: EJB-CLASS-NAME/local
remote interface: EJB-CLASS-NAME/remote
Example: if the HelloWorld application is packaged into HelloWorld.jar, the JNDI name for its remote interface is: HelloWorldBean/remote
Note also that EJB-CLASS-NAME carries no package name: com.foshanshop.ejb3.impl.HelloWorldBean becomes just HelloWorldBean.
Many tutorials on the web obtain the JNDI path name in a way that does not work under JBoss, e.g.:
HelloWorld helloworld = (HelloWorld) ctx.lookup(HelloWorld.class.getName()); — this style works on Sun Application Server and GlassFish.
Package the client application above into a war file, then copy it to "[jboss install dir]\server\default\deploy". If the war file is named EJBTest.war, the client can be accessed at http://localhost:8080/EJBTest/Test.jsp.
****************************************************************************************************************************************
The @Local annotation indicates that the implemented interface is a local interface. When neither @Local nor @Remote is present, the interfaces a session bean implements default to local interfaces.
If you call an EJB on the same machine (ensuring the client and the EJB container run in the same JVM), accessing the EJB through its local interface is preferable to the remote interface, because remote access goes through remote procedure calls (RPCs), whereas local access returns the EJB reference directly from the JVM.
****************************************************************************************************************************************
If you try to run the client code in a standalone Tomcat server (on calling EJBs from a standalone Tomcat, see chapter 2), you will get the following exception:
java.lang.NullPointerException org.jboss.ejb3.stateless.StatelessLocalProxy.invoke(StatelessLocalProxy.java:74)
The cause of this exception is that the client calling the local interface is not in the same VM as the EJB container. A client application deployed under the JBoss deploy directory runs in the same VM as the EJB container; if client and container are in different VMs, the EJB can only be accessed through its remote interface.

When the local interface is called, two successive accumulations give different results, one 2 and one 4. This is because a stateless session bean does not track per-user state: once instantiated, it is added to the session pool and shared by all users. Even after a user goes away, the stateless session bean's life does not necessarily end; it may remain in the session pool for other users to call. If it has its own attributes (variables), those variables are affected by every user that calls it.

On the JBoss website you can see that from EJB 3.0 RC9 onward, the Remote and Local interfaces may point to the same business interface, so the client no longer has to switch between business interface classes depending on which interface it calls. Of course, this applies when the Remote and Local interface methods are identical.

3. Developing stateful session beans (Stateful Session Beans)
A stateful session bean is a session bean that maintains its own state. Each user gets his own instance, and for the user's lifetime the stateful session bean keeps that user's information, i.e. it is "stateful". Once the user goes away (the call ends or the instance ends), the stateful bean's life cycle also ends. Each user initially receives a fresh stateful session bean.
A stateful session bean must implement the Serializable interface so that the EJB container can serialize and store its state information when it is no longer in use.
The @Stateful annotation declares a stateful session bean; the @Remote annotation names the stateful bean's remote interface.
The @SuppressWarnings("serial") annotation suppresses the warning about a missing serialVersionUID definition.
Because every user of a stateful session bean has his own instance, operations by two users on the bean do not affect each other.
Note also: if you later need to operate on a particular user's instance, you must cache the bean's stub object on the client (in JSP this is usually done with the session), so that on each subsequent call the container knows to provide the same bean instance.

4. How to change a session bean's JNDI name
To customize JNDI names in JBoss, use the @LocalBinding and @RemoteBinding annotations: @LocalBinding sets the JNDI name of the session bean's local interface, and @RemoteBinding sets the JNDI name of its remote interface. Example:
@Remote ({Operation.class})
@RemoteBinding (jndiBinding="foshanshop/RemoteOperation")
@Local ({LocalOperation.class})
@LocalBinding (jndiBinding="foshanshop/LocalOperation")
In WebLogic 10, you can set a global JNDI name through @Stateless.mappedName():
@Stateless(mappedName="OperationBeanRemote")
The client calls the EJB like this:
InitialContext ctx = new InitialContext(props);
Operation operation = (Operation) ctx.lookup("OperationBeanRemote#com.foshanshop.ejb3.Operation");

5. The session bean life cycle:

6. Interceptors
An interceptor can monitor one method or all methods of a program, giving fine-grained control over the method-call flow. Interceptors can be used on stateless session beans, stateful session beans, and message-driven beans. An interceptor can be a method in the same bean class or an external class.
@Interceptors({HelloInterceptor.class})
public class HelloChinaBean implements HelloChina, HelloChinaRemote {
The @Interceptors annotation specifies one or more interceptors defined in external classes. The interceptor HelloInterceptor above monitors all methods of HelloChinaBean.
The interceptor, HelloInterceptor.java:
public class HelloInterceptor {
    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        System.out.println("*** HelloInterceptor intercepting");
        long start = System.currentTimeMillis();
        try {
            if (ctx.getMethod().getName().equals("SayHello")) {
                System.out.println("*** SayHello has been called! *** ");
            }
            if (ctx.getMethod().getName().equals("Myname")) {
                System.out.println("*** Myname has been called! *** ");
            }
            return ctx.proceed();
        } catch (Exception e) {
            throw e;
        } finally {
            long time = System.currentTimeMillis() - start;
            System.out.println("Elapsed: " + time + "ms");
        }
    }
}
The @AroundInvoke annotation marks the method to be used as an interceptor. A method annotated with @AroundInvoke must follow this format:
public Object XXX(InvocationContext ctx) throws Exception
where XXX can be any method name.
Besides defining interceptors externally, you can also define one or more methods of the session bean itself as interceptors. Using the earlier HelloChinaBean as an example, here is how to define an interceptor inside a session bean:
@Stateless
@Remote ({HelloChinaRemote.class})
@Local(HelloChina.class)
public class HelloChinaBean implements HelloChina, HelloChinaRemote {
    public String SayHello(String name) {
        return name + " says: Hello, China.";
    }
    public String Myname() {
        return "I am from Foshan";
    }
    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        try {
            if (ctx.getMethod().getName().equals("SayHello")) {
                System.out.println("*** HelloChinaBean.SayHello() has been called! *** ");
            }
            if (ctx.getMethod().getName().equals("Myname")) {
                System.out.println("*** HelloChinaBean.Myname() has been called! *** ");
            }
            return ctx.proceed();
        } catch (Exception e) {
            throw e;
        }
    }
}
Above, a single @AroundInvoke annotation is all that is needed to mark the interceptor method.

7. Dependency injection (DI)
With the @EJB annotation, you can inject an EJB stub object into any POJO managed by the EJB 3.0 container. If the annotation is placed on a field, the container assigns a value to it before it is first accessed. Dependency injection works only with the local naming service, so you cannot inject objects from a remote server.
@Stateless
@Remote ({Injection.class})
public class InjectionBean implements Injection {
    @EJB (beanName="HelloWorldBean")
    HelloWorld helloworld;
    public String SayHello() {
        return helloworld.SayHello("the injected caller");
    }
The beanName attribute of @EJB gives the EJB's name (if the name attribute of @Stateless or @Stateful was not set, it defaults to the class name without the package); its other attribute, mappedName, gives the EJB's global JNDI name.
The following fragment shows how to look up the HelloWorldBean session bean with the beanName or mappedName attribute:
public class InjectionBean implements Injection {
    @EJB (beanName="HelloWorldBean")  // or: @EJB (mappedName="HelloWorldBean/remote")
    HelloWorld helloworld;
If the @EJB annotation is used on a JavaBean-style setter method, the container automatically calls the bean's setter with the correct argument before the property is first used:
public class InjectionBean implements Injection {
    HelloWorld helloworld;
    @EJB (beanName="HelloWorldBean")
    public void setHelloworld(HelloWorld helloworld) {
        this.helloworld = helloworld;
    }
The @EJB annotation can only inject EJB stub objects. Besides @EJB, EJB 3.0 also supports the @Resource annotation for injecting any resource from JNDI. The following example shows how to inject a data source; "java:/DefaultMySqlDS" is the global JNDI name of the DefaultMySqlDS data source.
public class InjectionBean implements Injection {
    @EJB(beanName = "HelloWorldBean")
    HelloWorld helloworld;
    @Resource(mappedName = "java:/DefaultMySqlDS")
    DataSource myDb;
    public String SayHello() {
        String str = "";
        try {
            Connection conn = myDb.getConnection();
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT studentName FROM student");
            if (rs.next()) {
                str = rs.getString(1);
            }
            rs.close();
            stmt.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return helloworld.SayHello(str);
    }

8. Timer service (p. 60 onward)
The timer service executes a piece of code after a specified period of time; you have probably used something similar in other settings. Developing a timer service is much the same as developing a session bean, with a few extra steps: use the container's SessionContext object to create the timer, and declare the timeout method with the @Timeout annotation.

9. Developing security services:

10. Custom security domains:

IV. JMS (Java Message Service) (p. 75)
The Java Message Service (JMS) is an enterprise messaging system, tightly integrated into the JBoss Server platform. An enterprise messaging system lets applications communicate with other systems by exchanging messages.

1. Message anatomy
At the center of a messaging system is the message. A Message has three parts:
① Header: a standard set of fields that both clients and providers use to identify and route messages.
② Properties: support adding optional header fields to a message. If your application needs to catalog and classify a message without using the standard header fields, you can add a property to the message. The set<Type>Property(...) and get<Type>Property(...) methods set and get properties of the various Java types, including Object.
③ Body: the message payload.

2. Message delivery models
Point-to-point (PTP): a message is delivered to exactly one receiver.
Publish/subscribe (pub/sub): a message is delivered to multiple receivers.

1. Message-driven beans (Message-Driven Bean)
A message-driven bean (MDB) is a component designed specifically to handle message-based requests. It is an asynchronous, stateless session bean: after calling an MDB, the client does not wait but returns immediately, and the MDB processes the request asynchronously. An MDB class must implement the MessageListener interface. When the container detects a message on the queue the bean is listening to, it calls the onMessage() method, passing the message as the argument; in onMessage(), the MDB decides how to handle the message.
Annotations can also be used to configure which queue the MDB listens to; the container reads the annotation metadata when the MDB is deployed.
An MDB is a good fit when a business operation takes a long time to execute and its result need not be fed back to the user in real time, e.g. sending the user an SMS after an order succeeds.

2. Entity beans (Entity Bean)
Persistence is a higher-level abstraction sitting on top of JDBC. The persistence layer maps objects to the database, so that when querying, loading, updating, or deleting objects you need not use an API as verbose as JDBC. In early versions of EJB, persistence was part of the EJB platform; starting with EJB 3.0, persistence has become its own specification, called the Java Persistence API.
The Java Persistence API defines a way to map regular plain Java objects (sometimes called POJOs) to a database. These plain Java objects are called entity beans. Apart from being mapped to the database with Java Persistence metadata, an entity bean is no different from any other Java class. In effect, creating an entity bean object inserts a new record, deleting an entity bean deletes the corresponding record from the database, and when an entity bean is modified, the container automatically synchronizes its state with the database.
The Java Persistence API also defines a query language (JPQL) with features much like SQL, trimmed down to handle Java objects rather than a raw relational schema.

Annotations:
The @Entity annotation marks the class as an entity bean; the @Table annotation specifies the database table the entity maps to, with @Table.name() giving the table name. If @Table is omitted, the system uses the class name as the mapped table name by default. Each instance of an entity bean represents one row of data in the table, and each column of the row corresponds to one property of the instance.
The @javax.persistence.Column annotation defines which column of the relational table a member property maps to and some structural information about that column (whether the column name is unique, whether nulls are allowed, whether updates are allowed, and so on). Its attributes:
·name: the mapped column name. E.g. to map the PersonName column of the Person table, add @Column(name = "PersonName") above the getName method; if no column name is given, the container uses the property name as the default mapped column name.
·unique: whether the column is unique
·nullable: whether nulls are allowed
·length: for character columns, the maximum character length of the column
·insertable: whether inserts are allowed
·updatable: whether updates are allowed
·columnDefinition: the DDL used to create this column when the table is built
·secondaryTable: the secondary table name, if this column is not built on the primary table (by default it is built on the primary table)
The @Id annotation marks the personid property as the table's primary key, which can be generated in several ways:
① TABLE: the container uses an underlying data table to guarantee uniqueness.
@TableGenerator(name="Person_GENERATOR", // a name for this generation strategy
                table= "Person_IDGenerator", // the table that generates IDs
                pkColumnName= "PRIMARY_KEY_COLUMN", // name of the primary-key column
                valueColumnName= "VALUE_COLUMN", // name of the column that holds the generated ID value
                pkColumnValue= "personid", // value of the primary-key column (locates the record)
                allocationSize=1) // increment
@Id
@GeneratedValue(strategy=GenerationType.TABLE, generator="Person_GENERATOR")
public Integer getPersonid() {
    return personid;
② SEQUENCE: uses a database SEQUENCE to guarantee uniqueness (Oracle generates unique IDs through sequences).
@SequenceGenerator(name="Person_SEQUENCE", // a name for this generation strategy
                   sequenceName= "Person_SEQ") // the sequence name (created automatically if it does not exist)
public class Person implements Serializable {
    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="Person_SEQ")
    public Integer getPersonid() {
        return personid;
    }
③ IDENTITY: uses a database IDENTITY column to guarantee uniqueness (databases such as MySQL and SQL Server generate unique IDs by auto-increment).
·AUTO: the container picks a suitable strategy to guarantee uniqueness (the container decides how the unique primary key is generated; Hibernate chooses a suitable strategy according to the database type, whereas TopLink is less accommodating).
·NONE: the container is not responsible for key generation; the calling program supplies it.
The @GeneratedValue annotation defines how the identifier field is generated.
Note: an entity bean that must travel over the network has to implement the Serializable interface, otherwise a java.io.InvalidClassException is thrown.
The @Temporal annotation specifies which of the database types date, time, or timestamp a java.util.Date or java.util.Calendar property maps to.
@Temporal(value=TemporalType.DATE)
public Date getBirthday() {
    return birthday;
}
I think of the session bean as the DAO layer (the interface declares the methods), and the session bean implementation as the DAO implementation (implementing the interface and its methods). The implementation holds an EntityManager em object; the EntityManager is managed and configured automatically by the EJB container, does not need to be created by the user, and is used to operate on entity beans.
@PersistenceContext
protected EntityManager em;
Look up a user with em.find(), update with em.merge(), insert with em.persist(). Query all persons ordered by id:
public List getPersonList() {
    Query query = em.createQuery("from Person order by personid asc");
    List list = query.getResultList();
    return list;
}
Nowhere in the class is EntityManager em assigned a value, yet it can be used directly afterwards. This is because after instantiating the session bean, the container dynamically injects the EntityManager object through the @PersistenceContext annotation.
If the persistence.xml file configures several different persistence units, the persistence-unit name must be specified when injecting the EntityManager object, via the unitName attribute of the @PersistenceContext annotation; with only one persistence-unit configuration, it need not be specified explicitly.
@PersistenceContext(unitName="foshanshop")
EntityManager em;
The configuration of the persistence.xml file follows:

2.1 The persistence.xml configuration file
An entity bean application consists of the entity classes and a persistence.xml file; the persistence.xml file lives in the META-INF directory. The persistence.xml file specifies the data source the entity beans use and the default behavior of the EntityManager object. Its configuration is explained below:
<persistence>
 <persistence-unit name="foshanshop">
 <jta-data-source>java:/DefaultMySqlDS</jta-data-source>
 <properties>
 <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
 </properties>
 </persistence-unit>
</persistence>
There can be one or more persistence-unit nodes; each defines a persistence-unit name, the data source used, and product-specific persistence properties. The name attribute defines the persistence-unit name. The jta-data-source node specifies the JNDI name of the data source the entity beans use (on configuring a data source, see the next section, "Configuring a JBoss data source"); if the application is deployed under JBoss, the data-source name must carry the java:/ prefix, and data-source names are case sensitive. The properties node specifies the persistence product's properties. Each application server uses a different persistence product: JBoss uses Hibernate, WebLogic 10 uses Kodo, and GlassFish/Sun Application Server/Oracle use TopLink. Because JBoss uses Hibernate, the Hibernate property hibernate.hbm2ddl.auto applies: it controls whether the database structure is synchronized when the entity beans are deployed. If hibernate.hbm2ddl.auto is set to create-drop, the corresponding database tables are created automatically when the entity beans are deployed and dropped when they are undeployed (note: starting or stopping the JBoss server also triggers deployment and undeployment of entity beans). TopLink's toplink.ddl-generation property plays the same role. The available Hibernate properties and their default values can be seen in the file [JBoss install dir]\server\default\deploy\ejb3.deployer\META-INF/persistence.properties.
Tip: if your tables already exist and you want to keep their data, you can set hibernate.hbm2ddl.auto to none or update when deploying the entity beans. Later, so that changes to an entity bean are reflected in the data table, update is recommended: when an entity bean gains a property, the corresponding column is added to the table at the same time.

Property mapping:
If you do not want certain member properties mapped to database columns, annotate them with @Transient.
To map an enum object to the database, use the @Enumerated annotation:
@Enumerated(EnumType.STRING)
public CommentType getType() {
    return type;
}
The @Lob annotation maps large data types: when the property type is byte[], Byte[], or java.io.Serializable, @Lob maps to the database Blob type; when the property type is char[], Character[], or java.lang.String, @Lob maps to the database Clob type.
To avoid loading a @Lob-annotated property's large value into memory every time the entity is loaded, the property should be loaded lazily, which is what the @Basic annotation is for:
public @interface Basic {
    FetchType fetch() default EAGER;
    boolean optional() default true;
}
The fetch attribute specifies whether loading is lazy (the default is eager loading); the optional attribute specifies whether the column may be null when the database structure is generated.
@Lob
@Basic(fetch=FetchType.LAZY)
public String getContent() {
    return content;
}

The EntityManager persistence manager:
Commonly used EntityManager APIs:
① Fetching an entity: find() or getReference()
When no record is found in the database, getReference() and find() differ: find() returns null, while getReference() throws a javax.persistence.EntityNotFoundException; in addition, getReference() does not guarantee that the entity bean has been initialized. If the argument passed to getReference() or find() is not an entity bean, an IllegalArgumentException is raised.
② Inserting: persist()
If the argument passed to persist() is not an entity bean, an IllegalArgumentException is raised.
③ Updating an entity
While an entity is managed by the container, you can call the entity's set methods to modify its data; the updated data is synchronized to the database only when the container decides to flush. If you want the modified data synchronized to the database immediately, call EntityManager.flush().
④ Merging: merge()
The merge() method is used when the entity bean has become detached from the EntityManager's management; when the container decides to flush, the data is synchronized to the database.
When em.merge(person) executes, the container works by these rules:
1. If the container already holds a managed person instance with the same ID, it copies the contents of the person argument into that managed instance; merge() returns the managed instance, while the person argument remains detached and unmanaged. The container synchronizes the instance to the database when it decides to flush.
2. If no managed person instance with the same ID exists in the container, the container copies the person argument into a new managed person instance; merge() returns that managed instance, while the person argument remains detached and unmanaged. The container synchronizes the instance to the database when it decides to flush.
If the argument passed to merge() is not an entity bean, an IllegalArgumentException is raised.
⑤ Deleting: remove()
em.remove(person); // if the cascade relation is cascade=CascadeType.ALL, deleting person also deletes the cascaded objects. Setting the cascade attribute to cascade=CascadeType.REMOVE has the same effect.
If the argument passed to remove() is not an entity bean, an IllegalArgumentException is raised.
⑥ Executing JPQL: createQuery()
Entity beans are obtained through JPQL. To execute a JPQL statement, you must create a Query object through the EntityManager's createQuery() or createNamedQuery() method.
Query query = em.createQuery("select p from Person p where p.name='黎明'");
List result = query.getResultList();
Iterator iterator = result.iterator();
while (iterator.hasNext()) {
    // process each Person
}
Query query = em.createQuery("update Person as p set p.name=?1 where p.personid=?2");
query.setParameter(1, "黎明");
query.setParameter(2, new Integer(1));
int result = query.executeUpdate(); // number of records affected
// executing a delete statement
Query query = em.createQuery("delete from Person");
int result = query.executeUpdate(); // number of records affected
⑦ Executing SQL: createNativeQuery()
Note that this operates on SQL statements, not JPQL; do not confuse the two.
Query query = em.createNativeQuery("select * from person", Person.class);
List result = query.getResultList();
if (result != null) {
    Iterator iterator = result.iterator();
    while (iterator.hasNext()) {
        Person person = (Person) iterator.next();
        // ...
    }
}
// executing an update statement directly through SQL
Query query = em.createNativeQuery("update person set age=age+2");
query.executeUpdate();
⑧ Refreshing an entity: refresh()
If the currently managed entity is no longer the latest data in the database, you can refresh the entity with refresh(); the container rewrites the new values from the database into the entity. This situation generally arises when someone updates the database record after you fetched the entity and you need the latest data. Calling find() or getReference() again would also give you the latest data, but that approach is less elegant.
If the argument passed to refresh() is not an entity bean, an IllegalArgumentException is raised.
⑨ Checking whether an entity is currently managed: contains()
contains() takes an entity as its argument; it returns true if the entity object is currently managed by the persistence context, otherwise false. If the argument is not an entity bean, an IllegalArgumentException is raised.
⑩ Detaching all currently managed entities: clear()
When processing a large number of entities, failing to detach the entities you have already processed from the EntityManager consumes a great deal of memory. After calling the EntityManager's clear() method, all managed entities are detached from the persistence context.
Before the transaction commits (by default the transaction commits at the end of the call stack, e.g. when the method returns), calling clear() loses any changes made to the entities, so it is recommended to call flush() to save the changes before calling clear().
em.flush(); // push the updates into the database immediately
(11) Changing the entity manager's flush mode: setFlushMode()
Changing the flush mode uses javax.persistence.FlushModeType. By default, the entity manager's flush mode is AUTO; you can change it, e.g.: entityManager.setFlushMode(FlushModeType.COMMIT);
FlushModeType.AUTO: flushing happens before a query statement executes (except find() and getReference() queries) or when the transaction commits. Use it when a large batch of updates contains no query statements (other than find() and getReference() queries).
FlushModeType.COMMIT: flushing happens only when the transaction commits. Use it when a large batch of updates does contain query statements (other than find() and getReference() queries).
The biggest JDBC performance gain comes from reducing the network traffic between the JDBC driver and the database. FlushModeType.COMMIT completes the updates in a single network exchange, whereas FlushModeType.AUTO may require several exchanges (as many exchanges as flushes triggered).
(12) Getting a reference to the persistence implementation: getDelegate()
Through the getDelegate() method, you can get a reference to the EntityManager's underlying persistence implementation. For example, JBoss EJB3 uses Hibernate as its persistence product, which you can access via getDelegate():
@PersistenceContext
protected EntityManager em;
HibernateEntityManager manager = (HibernateEntityManager) em.getDelegate();
With a reference to Hibernate you can then code directly against Hibernate, but this approach is not advisable and is strongly discouraged. In WebLogic, the same method gives you access to Kodo.

Object-relational mapping
① Handling table or column names that collide with database reserved words
If the application uses MySQL and a mapped table or column name collides with a database reserved word, the SQL produced by the persistence engine will fail at execution time. A workaround exists, but it is database-specific and hurts portability, so use it only when necessary: wrap the name in backquote characters, e.g. @Table(name = "`Order`"). If the database is SQL Server, you can wrap the table or column name in []; SQL Server also executes successfully without [], so use this only when errors occur.
② One-to-many and many-to-one mapping
In a bidirectional one-to-many relationship, the "one" side is the relationship owner (owner side) and the "many" side is the maintained side (inverse side). A foreign-key column on the inverse side points to the owner side's primary-key column.
@OneToMany(mappedBy="order", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
@OrderBy(value = "id ASC")
public Set<OrderItem> getOrderItems() {
    return orderItems;
}
The attributes of the @OneToMany annotation:
(1) targetEntity — a Class-typed attribute. Defines the type of the related class; the default is the class type of the member property, so it usually need not be given.
(2) mappedBy — a String-typed attribute. Defines a bidirectional relationship between the classes. For a unidirectional relationship it need not be given; when the classes form a bidirectional relationship, it must be defined with this attribute, otherwise data-consistency problems may arise.
(3) cascade — of type CascadeType[]. Defines the cascade relationship between the classes. The defined cascades are treated by the container as the same operation applied to both the current class object and its associated class objects, and the relation is applied recursively. The value of cascade is one or more of CascadeType.PERSIST (cascade create), CascadeType.REMOVE (cascade delete), CascadeType.REFRESH (cascade refresh), and CascadeType.MERGE (cascade update); another option is CascadeType.ALL, meaning all four.
(4) fetch — a FetchType-typed attribute. The options are FetchType.EAGER and FetchType.LAZY. The former loads the related class (here the OrderItem class) when the main class (here the Order class) is loaded; the latter loads the related class only when it is accessed. The default is FetchType.LAZY.
The @OrderBy(value = "id ASC") annotation indicates that OrderItems are sorted by id in ascending order when loaded.
@ManyToOne(cascade=CascadeType.REFRESH, optional=false)
@JoinColumn(name = "order_id")
public Order getOrder() {
    return order;
}
The @ManyToOne annotation has four attributes: targetEntity, cascade, fetch, and optional. The first three mean the same as the identically named attributes of the @OneToMany annotation, but the default value of @ManyToOne's fetch attribute is FetchType.EAGER.
The optional attribute defines whether the associated class must exist. When false, both sides of the association must exist; if the inverse side is absent, the query result is null. When true, the inverse side may be absent; the query still returns the owner side, and in the owner side the property pointing to the inverse side is null. The default value of optional is true. In effect, optional specifies the join type between the associated class and the associating class: optional=false gives an inner join, optional=true gives a left join.
public class OrderItem implements Serializable {
    @ManyToOne(cascade=CascadeType.REFRESH, optional=false)
    @JoinColumn(name = "order_id")
    public Order getOrder() {
        return order;
    }
// SQL when fetching OrderItem: select * from OrderItem item inner join Orders o on o.order_id=item.id — the query returns records only when both the OrderItem and Orders tables have associated records.
    @ManyToOne(cascade=CascadeType.REFRESH, optional=true)
    @JoinColumn(name = "order_id")
    public Order getOrder() {
        return order;
    }
// SQL when fetching OrderItem: select * from OrderItem item left outer join Orders o on o.order_id=item.id — even if the Orders table has no record while OrderItem does, the query still returns records.
The @JoinColumn(name = "order_id") annotation makes the order_id column of the OrderItem mapping table a foreign key to the primary-key column of the Order mapping table.
Note: when a business method must return an entity bean to the client, besides the entity bean itself implementing the Serializable interface, if the associated class (OrderItem) is lazily loaded, you must also load the associated class (by accessing it) before returning the entity bean; otherwise accessing the associated class on the client throws a lazy-loading exception. Regardless of lazy loading, a "join fetch" clause can load the association explicitly, as in the business method getAllOrder.
③ One-to-one mapping
A one-to-one relationship requires the mappedBy attribute to be defined in the @OneToOne annotation on the relationship owner side (owner side). On the maintained side (inverse side), a foreign-key column is built pointing to the owner side's primary-key column.
Owner side:
@OneToOne(optional = true, cascade = CascadeType.ALL, mappedBy = "person")
public IDCard getIdcard() {
    return idcard;
}
public void setIdcard(IDCard idcard) {
    this.idcard = idcard;
}
The @OneToOne annotation has five attributes: targetEntity, cascade, fetch, optional, and mappedBy. The first four correspond one-to-one to the identically named attributes of the @ManyToOne annotation; the default value of fetch is FetchType.EAGER. The mappedBy attribute means the same as the identically named attribute of @OneToMany.
optional = true allows the idcard property to be null, i.e. a person may have no ID card; minors have none.
Inverse side:
@OneToOne(optional = false, cascade = CascadeType.REFRESH)
@JoinColumn(name = "Person_ID", referencedColumnName = "personid", unique = true)
public Person getPerson() {
    return person;
}
public void setPerson(Person person) {
    this.person = person;
}
IDCard is the inverse side; optional = false requires the person property to be non-null, i.e. an ID card must have a corresponding owner. @JoinColumn(name = "Person_ID", referencedColumnName="personid", unique = true) makes the Person_ID column of the IDCard mapping table a foreign key to the personid column of the Person mapping table; unique = true makes the values of the Person_ID column non-repeating.
④ Many-to-many mapping:
Many-to-many mapping uses an intermediate join table, which takes the primary keys of both sides as foreign keys. EJB3 makes the join-table metadata configurable: the user can customize the join table's table name and column names.
Student side:
@ManyToMany(mappedBy = "students")
public Set<Teacher> getTeachers() {
    return teachers;
}
public void setTeachers(Set<Teacher> teachers) {
    this.teachers = teachers;
}
The @ManyToMany annotation marks Student as one side of the many-to-many relationship; the mappedBy attribute indicates that Student is the inverse side of the bidirectional relationship (Teacher, which defines the join table below, is the owning side).
Teacher side:
@ManyToMany(cascade = CascadeType.PERSIST, fetch = FetchType.LAZY)
@JoinTable(name = "Teacher_Student",
  joinColumns = {@JoinColumn(name = "Teacher_ID", referencedColumnName = "teacherid")},
  inverseJoinColumns = {@JoinColumn(name = "Student_ID", referencedColumnName = "studentid")})
public Set<Student> getStudents() {
    return students;
}
The @ManyToMany annotation marks Teacher as one side of the many-to-many relationship.
@JoinTable describes the join-table relationship: the name attribute specifies the join table's name, and joinColumns defines the foreign-key relation between the join table and the Teacher table. In the code above, the Teacher_ID column of the join table Teacher_Student is the foreign-key column corresponding to the Teacher table's primary-key column. The inverseJoinColumns attribute defines the foreign-key relation between the join table and the other side (Student).
⑤ Parameter queries
Parameter queries are similar to parameter queries in SQL. EJB3 QL supports two ways of defining parameters: named parameters and positional parameters; only one style may be used within a single query.
a. Named-parameter query
// fetch the person with the given personid
Query query = em.createQuery("select p from Person p where p.personid=:Id");
query.setParameter("Id", new Integer(1));
List result = query.getResultList();
b. Positional-parameter query
// fetch the person with the given personid
Query query = em.createQuery("select p from Person p where p.personid=?1");
query.setParameter(1, new Integer(1));
List result = query.getResultList();
c. Date parameters
To pass a java.util.Date or java.util.Calendar parameter into a parameter query, you must use one of the special setParameter() methods, defined as follows:
// for named-parameter queries, parameter type java.util.Date
Query setParameter(String name, java.util.Date value, TemporalType temporalType);
// for named-parameter queries, parameter type java.util.Calendar
Query setParameter(String name, Calendar value, TemporalType temporalType);
// for positional-parameter queries, parameter type java.util.Date
Query setParameter(int position, Date value, TemporalType temporalType);
// for positional-parameter queries, parameter type java.util.Calendar
Query setParameter(int position, Calendar value, TemporalType temporalType);
Because a Date or Calendar object can describe a real date, a time, or a timestamp, we must tell the Query object how to use these parameters: we pass a javax.persistence.TemporalType into the setParameter method, telling the query interface which database type to use when converting the java.util.Date or java.util.Calendar parameter into native SQL.
⑥ The JPQL language
The Java Persistence API defines a query language with features much like SQL. JPQL is fully object-oriented, with inheritance, polymorphism, and associations.
a. Case sensitivity
Except for Java class and property names, queries are case-insensitive. So SeLeCT, sELEct, and SELECT are the same, but com.foshanshop.ejb3.bean.Person and com.foshanshop.ejb3.bean.PERSon are different, as are person.name and person.NAME.
b. Named queries
Predefine one or more query statements on the entity bean to reduce bugs caused by mistyping on every use. Frequently used query statements are usually defined as named queries:
@NamedQuery(name="getPerson", query= "FROM Person WHERE personid=?1")
@Entity
@Table(name = "Person")
public class Person implements Serializable {
To define several named queries, place the @NamedQuery annotations inside the @javax.persistence.NamedQueries annotation:
@NamedQueries({
  @NamedQuery(name="getPerson", query= "FROM Person WHERE personid=?1"),
  @NamedQuery(name="getPersonList", query= "FROM Person WHERE age>?1")
})
@Entity
@Table(name = "Person")
public class Person implements Serializable {
Once a named query is defined, we can execute the query by its name:
Query query = em.createNamedQuery("getPerson");
query.setParameter(1, 1);
c. Sorting (order by)
EJB3 QL defaults to asc ascending order.
// sort by age descending first, then by birthday ascending
Query query = em.createQuery("select p from Person p order by p.age desc, p.birthday asc");
d. Querying selected properties
The earlier examples all query the Entity class and return instances of the queried Entity class. EJB3 QL also lets us query and return directly just the properties we need rather than the whole Entity; when an entity has very many properties, such queries can improve performance.
// query only the properties (columns) we are interested in
Query query = em.createQuery("select p.personid, p.name from Person p order by p.personid desc");
// the collection elements are no longer Person but Object[] object arrays
List result = query.getResultList();
StringBuffer out = new StringBuffer("*************** QueryPartAttribute results ****************<BR>");
if (result != null) {
    Iterator iterator = result.iterator();
    while (iterator.hasNext()) {
        // take each row
        Object[] row = (Object[]) iterator.next();
        // the first value in the array is personid
        int personid = Integer.parseInt(row[0].toString());
        String PersonName = row[1].toString();
        out.append("personid=" + personid + "; Person Name=" + PersonName + "<BR>");
e. Using constructors in queries (Constructor)
EJB3 QL supports passing the queried properties directly as constructor arguments of a Java class, producing and returning entities as the result.
f. Aggregate queries (Aggregation)
The aggregate functions currently supported by EJB3 QL are:
1. AVG()
2. SUM()
3. COUNT(), with return type Long; note that the count(*) syntax works in Hibernate but not in TopLink and other products
4. MAX()
5. MIN()
As in SQL, if the aggregate function is not the only returned column of the select...from, a "GROUP BY" clause is needed; "GROUP BY" should contain all the properties of the select statement other than the aggregate functions.
// return the respective totals of male and female students
Query query = em.createQuery("select p.sex, count(p) from Person p group by p.sex");
// the collection elements are no longer Person but Object[] object arrays
List result = query.getResultList();
StringBuffer out = new StringBuffer("*************** QueryGroupBy results *********<BR>");
if (result != null) {
    Iterator iterator = result.iterator();
    while (iterator.hasNext()) {
        // take each row
        Object[] row = (Object[]) iterator.next();
        // the first value in the array is sex
        boolean sex = Boolean.parseBoolean(row[0].toString());
        // the second value is the return value of the aggregate function COUNT
        String sextotal = row[1].toString();
        out.append((sex ?
"男生":"女生")+ "总共有"+ sextotal+ "人<BR>"); } } return out.toString(); 注意:如果还需要加上查询条件,需要使用"HAVING"条件语句而不是"WHERE"语句。 //返回人数超过 1 人的性别 Query query = em.createQuery("select p.sex, count(p) from Person p group by p.sex having count(*)>?1"); //设置查询中的参数 query.setParameter(1, new Long(1)); //集合中的元素不再是 Person,而是一个 Object[]对象数组 g、关联(join) left out join/left join:left out join/left join 等,都是允许符合条件的右边表达式中的 Entiies为空。 inner join:inner join 要求右边的表达式必须返回 Entities。 left join/inner join fetch:left/left out/inner join fetch提供了一种灵活的查询加载方式来提高查询的性能。在默认的查询中,    Entity中的集合属性默认不会被关联,集合属性默认是缓加载( lazy-load )。 h、排除相同的记录 DISTINCT Query query = em.createQuery("select DISTINCT o from Order o inner join fetch o.orderItems order by o.orderid"); i、比较 Entity 在查询中使用参数查询时,参数类型除了 String, 原始数据类型( int, double 等)和它们的对象类型( Integer, Double 等),也可以是 Entity的实例。 j、批量更新(Batch Update) private String QueryBatchUpdate(){ //把所有订单的金额加 10 Query query = em.createQuery("update Order as o set o.amount=o.amount+10"); //update 的记录数 int result = query.executeUpdate(); k、批量删除(Batch Remove) Query query = em.createQuery("delete from OrderItem item where item.order in(from Order as o where o.amount<100)"); query.executeUpdate(); query = em.createQuery("delete from Order as o where o.amount<100"); int result = query.executeUpdate();//delete的记录数 l、使用操作符 NOT //查询除了指定人之外的所有订单 Query query = em.createQuery("select o from Order o where not(o.ower =?1) order by o.orderid"); Person person = new Person(); person.setPersonid(new Integer(2)); //设置查询中的参数 query.setParameter(1,person); List result = query.getResultList(); m、使用操作符 BETWEEN //查询金额在 300 到 1000 之间的订单 Query query = em.createQuery("select o from Order as o where o.amount between 300 and 1000"); List result = query.getResultList(); n、使用操作符 IN //查找年龄为 26,21 的 Person Query query = em.createQuery("select p from Person as p where p.age in(26,21)"); List result = query.getResultList(); o、使用操作符 LIKE //查找以字符串"li"开头的 Person Query query = em.createQuery("select p from Person as p where p.name 
like 'li%'"); List result = query.getResultList(); //可以结合 NOT 一起使用,比如查询所有 name 不以字符串"ming"结尾的 Person query = em.createQuery("select p from Person as p where p.name not like '%ming'"); result = query.getResultList(); p、使用操作符 IS NULL //查询含有购买者的所有 Order Query query = em.createQuery("select o from Order as o where o.ower is not null order by o.orderid"); List result = query.getResultList(); //查询没有购买者的所有 Order query = em.createQuery("select o from Order as o where o.ower is null order by o.orderid"); result = query.getResultList(); q、使用操作符 IS EMPTY IS EMPTY 是针对集合属性(Collection)的操作符。可以和 NOT 一起使用。低版权的 Mysql不支持 IS EMPTY。 //查询含有订单项的所有 Order Query query = em.createQuery("select o from Order as o where o.orderItems is not empty order by o.orderid"); List result = query.getResultList(); //查询没有订单项的所有 Order query = em.createQuery("select o from Order as o where o.orderItems is empty order by o.orderid"); r、使用操作符 EXISTS [NOT]EXISTS 需要和子查询配合使用。注:低版权的 Mysql不支持 EXISTS //如果存在订单号为 1 的订单,就获取所有 OrderItem Query query = em.createQuery("select oi from OrderItem as oi where exists (select o from Order o where o.orderid=1)"); //如果不存在订单号为 10 的订单,就获取 id 为 1 的 OrderItem query = em.createQuery("select oi from OrderItem as oi where oi.id=1 and not exists (select o from Order o where o.orderid=10)"); result = query.getResultList(); s、字符串函数 EJB3 QL定义了内置函数方便使用。这些函数的使用方法和 SQL中相应的函数方法类似。 符串函数包括: 1. CONCAT 字符串拼接 2. SUBSTRING 字符串截取 3. TRIM 去掉空格 4. LOWER 转换成小写 5. UPPER 装换成大写 6. LENGTH 字符串长度 7. 
LOCATE 字符串定位 //查询所有人员,并在姓名后面加上字符串"_foshan" Query query = em.createQuery("select p.personid, concat(p.name, '_foshan') from Person as p"); List result = query.getResultList(); //查询所有人员,并在姓名后面加上字符串"_foshan" Query query = em.createQuery("select p.personid, concat(p.name, '_foshan') from Person as p"); List result = query.getResultList(); t、计算函数 ABS 绝对值 SQRT 平方根 MOD 取余数 SIZE 取集合的数量 //查询所有 Order的订单号及其订单项的数量 Query query = em.createQuery("select o.orderid, size(o.orderItems) from Order as o group by o.orderid"); List result = query.getResultList(); //查询所有 Order的订单号及其总金额/10 的余数 query = em.createQuery("select o.orderid, mod(o.amount, 10) from Order as o"); result = query.getResultList(); u、子查询 子查询可以用于 WHERE和 HAVING 条件语句中。注:低版权的 Mysql不支持子查询。 //查询年龄为 26 岁的购买者的所有 Order Query query = em.createQuery("select o from Order as o where o.ower in(select p from Person as p where p.age =26) order by o.orderid"); List result = query.getResultList(); 分***页 v、结果集分页 QueryAPI有两个接口方法可以解决这个问题:setMaxResults( ) 和 setFirstResult( ) setMaxResults 方法设置获取多少条记录 setFirstResult 方法设置从结果集中的那个索引开始获取(假如返回的记录有3 条,容器会自动为记录编上索引, 索引从 0 开始,依次为 0,1,2) public List getPersonList(int max,int whichpage) { try { int index = (whichpage-1) * max; Query query = em.createQuery("from Person p order by personid asc"); List list = query.setMaxResults(max).setFirstResult(index).getResultList(); em.clear();//分离内存中受EntityManager管理的实体bean,让VM进行垃圾回收 return list; } catch (Exception e) { e.printStackTrace(); return null; } } JSP 客户端调用代码片断: <%@ page contentType="text/html; charset=GBK"%> <%@ page import="com.foshanshop.ejb3.PersonDAO, com.foshanshop.ejb3.bean.Person, javax.naming.*, java.util.Properties, java.util.List, java.util.Iterator"%> <% Properties props = new Properties(); props.setProperty("java.naming.factory.initial","org.jnp.interfaces.NamingContextFactory"); props.setProperty("java.naming.provider.url", "localhost:1099"); props.setProperty("java.naming.factory.url.pkgs", "org.jboss.naming"); InitialContext ctx = new 
InitialContext(props); try { PersonDAO persondao = (PersonDAO) ctx.lookup("PersonDAOBean/remote"); out.println("<br>============ 分页显示,每页记录数为2 =========<BR>"); String index = request.getParameter("index"); if (index==null || "".equals(index.trim())) index = "1"; int max = 2; //每页记录数为2 int whichpage = Integer.parseInt(index); //第几页 List list = persondao.getPersonList(max, whichpage); if (list!=null){ Iterator it = list.iterator(); while (it.hasNext()) { Person p = (Person)it.next(); out.println("人员编号:"+ p.getPersonid() + " 姓名:"+ p.getName() + "<Br>"); } } } catch (Exception e) { out.println(e.getMessage()); } %> w、调用存储过程(使用MySql数据库) 要调用存储过程,我们可以通过 EntityManager 对象的 createNativeQuery()方法执行 SQL 语句(注意:这里说的是 SQL语句,不是 EJB3 QL), 调用存储过程的 SQL格式如下: {call 存储过程名称(参数 1, 参数 2, … )} 在 EJB3 中你可以调用的存储过程有两种 1.无返回值的存储过程。 2.返回值为 ResultSet(以 select形式返回的值)的存储过程,EJB3 不能调用以 OUT 参数返回值的存储过程。 ① 调用无返回值的存储过程 CREATE PROCEDURE `AddPerson`() NOT DETERMINISTIC SQL SECURITY DEFINER COMMENT '' BEGIN INSERT into person(`PersonName`,`sex`,`age`) values('存储过程',1,25); END;     调用: private String QueryNoneReturnValueStoreProcedure(){ //调用无返回参数的存储过程 Query query = em.createNativeQuery("{call AddPerson()}"); query.executeUpdate(); ② 调用返回单值的存储过程 CREATE PROCEDURE `GetPersonName`(IN Pid INTEGER(11)) NOT DETERMINISTIC SQL SECURITY DEFINER COMMENT '' BEGIN select personname from person where `personid`=Pid; END;     调用: //调用返回单个值的存储过程 Query query = em.createNativeQuery("{call GetPersonName(?)}"); query.setParameter(1, new Integer(1)); String result = query.getSingleResult().toString(); ③ 调用返回表全部列的存储过程 CREATE PROCEDURE `GetPersonList`() NOT DETERMINISTIC SQL SECURITY DEFINER COMMENT '' BEGIN select * from person; END;     调用: //调用返回 Person 全部列的存储过程 Query query = em.createNativeQuery("{call GetPersonList()}", Person.class); List result = query.getResultList(); ④ 调用返回部分列的存储过程 CREATE PROCEDURE `GetPersonPartProperties`() NOT DETERMINISTIC SQL SECURITY DEFINER COMMENT '' BEGIN SELECT personid, personname from person; 
END;     调用: //调用返回部分列的存储过程 Query query = em.createNativeQuery("{call GetPersonPartProperties()}"); List result = query.getResultList(); StringBuffer out = new StringBuffer("*************** QueryPartColumnStoreProcedure 结果打印*********<BR>"); if (result!=null){ Iterator iterator = result.iterator(); while( iterator.hasNext() ){ //取每一行 Object[] row = ( Object[]) iterator.next(); //数组中的第一个值是 personid int personid = Integer.parseInt(row[0].toString()); String PersonName = row[1].toString(); out.append("人员 ID="+ personid+ "; 姓名="+PersonName+ "<BR>"); ⑦ 事务管理服务 当应用出现失败或异常时,它保证了数据库的完整性。你可以简单地将为一个POJO方法申明它的事务属性。 这样容器就可以在合适的上下文中运行这个方法。最常见的事务是定义在 session bean 的方法上,方法中 所有的数据库操作只有在方法正常退出时才会提交,如果方法抛出未捕获的异常,事务管理将回滚所有的变更。 @TransactionAttribute 注释用作定义一个需要事务的方法。它可以有以下参数: 1.REQUIRED:方法在一个事务中执行,如果调用的方法已经在一个事务中,则使用该事务,否则将创建一个新的事务。 2.MANDATORY:如果运行于事务中的客户调用了该方法,方法在客户的事务中执行。如果客户没有关联到     事务中, 容器就会抛出TransactionRequiredException。 如果企业 bean 方法必须用客户事务则采用 Mandatory属性。 3.REQUIRESNEW:方法将在一个新的事务中执行,如果调用的方法已经在一个事务中,则暂停旧的事务。在调用结束后恢复旧的事务。 4.SUPPORTS:如果方法在一个事务中被调用,则使用该事务,否则不使用事务。 5.NOT_SUPPORTED:如果方法在一个事务中被调用,容器会在调用之前中止该事务。在调用结束后,容器 会恢复客户事务。如果客户没有关联到一个事务中,容器不会在运行入该方法前启动一个新的事务。用 NotSupported 属性标识不需要事务的方法。因为事务会带来更高的性能支出,所以这个属性可以提高性能。 6.Never:如果在一个事务中调用该方法,容器会抛出 RemoteException。如果客户没有关联到一个事务中,         容器不会在运行入该方法前启动一个新的事务。 如果没有指定参数,@TransactionAttribute 注释使用 REQUIRED 作为默认参数。 @TransactionAttribute(TransactionAttributeType.REQUIRED) public void insertProduct(String name, Float price, boolean error) { ?????? ⑧ Entity的生命周期和状态 在 EJB3 中定义了四种 Entity的状态: 1. 新实体(new)。Entity由应用产生,和 EJB3 Persistence 运行环境没有联系,也没有唯一的标示符(Identity)。 2. 持久化实体(managed)。新实体和 EJB3 Persistence 运行环境产生关联 (通过 persist(), merge()等方法),在EJB3   Persistence 运行环境中存在和被管理,标志是在 EJB3 Persistence 运行环境中有一个唯一的标示(Identity)。 3. 分离的实体(detached)。Entity有唯一标示符,但它的标示符不被 EJB3 Persistence 运行环境管理, 同样的该   Entity也不被 EJB3 Persistence 运行环境管理。 4. 
删除的实体(removed)。Entity被 remove()方法删除,对应的纪录将会在当前事务提交的时候从数据库中删除。 ----------------------看到173页-----------------------------------------------------明天继续---------------------------------------------------------   2.2 JBoss 数据源的配置(见图片) Jboss有一个默认的数据源DefaultDS,他使用Jboss内置的HSQLDB数据库。 实际应用中你可能使用不同的数据库,如MySql、MsSqlServer、Oracle 等。 各种数据库的数据源配置模版你可以在[Jboss 安装目录]\docs\examples\jca 目录中找到,默认名称为:数据库名+ -ds.xml。          不管你使用那种数据库都需要把他的驱动类Jar包放置在[Jboss安装目录]\server\default\lib目录下,放置后需要启动Jboss服务器。  五、Web 服务(Web Service)    1、Web Service的创建 开发一个JSR-181 POJO Endpoint的 Web Service 应遵守下面几个步骤: 1> 建立一个 POJO endpoint 2> 把 endpoint 定义成一个 servlet 3> 把 endpoint打包成一个 Web 应用(war 文件) 建立一个 POJO endpoint. package com.foshanshop.ws; import javax.jws.WebMethod; import javax.jws.WebService; import javax.jws.soap.SOAPBinding; @WebService(name = "HelloWorld", targetNamespace = "http://com.foshanshop.ws", serviceName = "HelloWorldService") @SOAPBinding(style = SOAPBinding.Style.RPC) public class HelloWorldService { @WebMethod public String SayHello(String name) { return name+ "说:这是我的第一个 web 服务"; } } @WebService 这个注释放置在 Java 类的前面,声明这个类的部分方法可以被发布为 Web 服务。 @WebService 的属性用于设置 Web 服务被发布时的一些配置信息,常用的属性说明如下: 1. name Web 服务的名字,WSDL中 wsdl:portType 元素的 name 属性和它保持一致,默认是 Java 类或者接口的名字。 2. serviceName Web 服务的服务名,WSDL 中 wsdl:service 元素的 name 属性和它保持一致,默认是Java 类的名字+”Service” 。 3. targetNamespace WSDL文件所使用的 namespace,该 Web 服务中所产生的其他 XML文档同样采用这个作为 namespace 。 @SOAPBinding()表示这个服务可以映射到一个 SOAP 消息中。 Style 用于指定SOAP 消息请求和回应的编码方式。 @WebMethod 这个注释放在需要被发布成 Web 服务的方法前面。 把 POJO endpoint 定义成一个 servlet. 
Web.xml <web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"version="2.4"> <servlet> <servlet-name>HelloWorldService</servlet-name> <servlet-class>com.foshanshop.ws.HelloWorldService</servlet-class> </servlet> <servlet-mapping> <servlet-name>HelloWorldService</servlet-name> <url-pattern>/HelloWorldService/*</url-pattern> </servlet-mapping> </web-app> 把 endpoint打包成一个 web 应用(*.war),下面是 Ant配置文件 build.xml的片断: <target name="war" depends="compile" description="创建 WS 发布包"> <war warfile="${app.dir}/Services.war" webxml="${app.dir}/WEB-INF/web.xml"> <classes dir="${build.classes.dir}"> <include name="com/foshanshop/ws/HelloWorldService.class" /> </classes> </war> </target> 六、使用 EJB3.0 构建轻量级应用框架 1、在 WEB中使用 EJB3.0框架 0 0 猜你在找 【直播】机器学习&数据挖掘7周实训--韦玮 【套餐】系统集成项目管理工程师顺利通关--徐朋 【直播】3小时掌握Docker最佳实战-徐西宁 【套餐】机器学习系列套餐(算法+实战)--唐宇迪 【直播】计算机视觉原理及实战--屈教授 【套餐】微信订阅号+服务号Java版 v2.0--翟东平 【直播】机器学习之矩阵--黄博士 【套餐】微信订阅号+服务号Java版 v2.0--翟东平 【直播】机器学习之凸优化--马博士 【套餐】Javascript 设计模式实战--曾亮 查看评论 * 以上用户言论只代表其个人观点,不代表CSDN网站的观点或立场 个人资料 • 访问:6175次 • 积分:199 • 等级: • 排名:千里之外 • 原创:14篇 • 转载:1篇 • 译文:0篇 • 评论:1条 文章分类 文章存档 最新评论
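The paging arithmetic used in getPersonList above — first-result index = (page − 1) × page size, then a window of at most max records — is worth isolating, since off-by-one errors here are common. A minimal Python sketch of the same setFirstResult/setMaxResults logic, independent of JPA (the function name and sample data are illustrative, not part of EJB):

```python
def page_slice(records, page, per_page):
    """Return the records for 1-based page number `page`,
    mirroring setFirstResult/setMaxResults semantics."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be >= 1")
    first = (page - 1) * per_page            # value passed to setFirstResult
    return records[first:first + per_page]   # setMaxResults analogue

people = ["p1", "p2", "p3", "p4", "p5"]
print(page_slice(people, 1, 2))  # → ['p1', 'p2']
print(page_slice(people, 3, 2))  # → ['p5'] (last, partial page)
```

Note that a short final page simply returns fewer records, exactly as the JPA query would.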
Adobe Analytics

If you are dealing with large data sets and trying to make sense of them, you need powerful tools to analyze and visualize the data. Power BI is one such tool: it helps you turn your data into actionable insights. To make the most of Power BI, however, you need to connect it to various data sources, and Adobe Analytics is one data source widely used by organizations. In this article, we discuss how to use Power Query M language code to connect to Adobe Analytics from inside Power BI.

What is Adobe Analytics?
Adobe Analytics is a web analytics service that helps organizations track and analyze website traffic and user behavior. It provides a comprehensive set of features for measuring and optimizing your digital marketing efforts. With Adobe Analytics, you can track a wide range of metrics, such as page views, bounce rates, conversion rates, and more.

Why Connect Power BI to Adobe Analytics?
Power BI is a powerful data visualization tool with a wide range of features for analyzing and visualizing data. By connecting Power BI to Adobe Analytics, you can gain deeper insight into your web traffic and user behavior: you can create interactive reports and dashboards that help you identify trends, patterns, and opportunities, and you can automate data refreshes to keep your reports up to date.

Using Power Query M Language Code to Connect to Adobe Analytics
Power Query is a data connection and transformation tool built into Power BI. It provides a powerful yet easy-to-use interface for connecting to various data sources, including Adobe Analytics, and it uses a language called M to perform data transformations. Here are the steps to connect to Adobe Analytics using Power Query M language code:
1. Open Power BI and click "Get Data" on the Home tab.
2. In the Get Data window, select "Adobe Analytics" from the list of available data sources.
3. In the Adobe Analytics window, enter your Adobe Analytics credentials and click "Connect."
4. Once connected, select the data you want to import: either choose a report suite or create a custom report.
5. After selecting the data, click "Load" to import the data into Power BI.
6. In the Query Editor window, you can see the M language code generated by Power Query. This code can be edited to perform data transformations.
7. Use the M language code to filter, transform, and shape your data according to your requirements. Power Query provides a wide range of functions for performing complex data transformations.
8. Once you have transformed your data, load it into Power BI and create reports and dashboards.

Conclusion
Connecting Power BI to Adobe Analytics using Power Query M language code can help you gain deeper insight into your web traffic and user behavior. By automating data refreshes and creating interactive reports and dashboards, you can make data-driven decisions and optimize your digital marketing efforts. Power Query provides a powerful yet easy-to-use interface for connecting to various data sources and performing data transformations. With Power BI and Adobe Analytics, you can turn your data into actionable insights.
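The shaping steps described above — filter rows, then group and aggregate — are the same operations whatever language expresses them, and prototyping them outside the Query Editor can help you decide what the M code should do. A rough, illustrative sketch in plain Python (this is not M code, and the rows and column names are invented, not a real Adobe Analytics export):

```python
from collections import defaultdict

# Hypothetical rows from a web-analytics report: one record per page/day.
rows = [
    {"page": "home",    "visits": 120},
    {"page": "home",    "visits": 80},
    {"page": "pricing", "visits": 40},
    {"page": "blog",    "visits": 25},
    {"page": "pricing", "visits": 35},
]

# Filter, then group-and-sum -- the kind of shaping the article
# describes doing with M code in Power Query's editor.
totals = defaultdict(int)
for row in rows:
    if row["visits"] > 30:                    # filter step
        totals[row["page"]] += row["visits"]  # group/aggregate step

print(dict(totals))  # → {'home': 200, 'pricing': 75}
```

The 25-visit "blog" row is dropped by the filter before aggregation, which is why it does not appear in the result.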
Sending an SMS
To send an SMS, replace the following variables in the example below:

TO_NUMBER — the number you are sending the SMS to, in E.164 format. For example 447700900000.
NEXMO_API_KEY — you can find this in your account overview.
NEXMO_API_SECRET — you can find this in your account overview.

cURL
Write the code. Add the following to send-sms.sh:
curl -X "POST" "https://rest.nexmo.com/sms/json" \
  -d "from=AcmeInc" \
  -d "text=A text message sent using the Nexmo SMS API" \
  -d "to=$TO_NUMBER" \
  -d "api_key=$NEXMO_API_KEY" \
  -d "api_secret=$NEXMO_API_SECRET"
Run your code. Save this file to your machine and run it:
sh send-sms.sh

Node.js
Prerequisites. Install dependencies:
npm install nexmo
Initialize your dependencies. Create a file named send.js and add the following code:
const Nexmo = require('nexmo')

const nexmo = new Nexmo({
  apiKey: NEXMO_API_KEY,
  apiSecret: NEXMO_API_SECRET
})
Write the code. Add the following to send.js:
const from = FROM_NUMBER
const to = TO_NUMBER
const text = 'A text message sent using the Nexmo SMS API'

nexmo.message.sendSms(from, to, text, (err, responseData) => {
  if (err) {
    console.log(err);
  } else {
    if(responseData.messages[0]['status'] === "0") {
      console.log("Message sent successfully.");
    } else {
      console.log(`Message failed with error: ${responseData.messages[0]['error-text']}`);
    }
  }
})
Run your code. Save this file to your machine and run it:
node send.js

Java
Prerequisites. Install dependencies — add the following to build.gradle:
compile 'com.nexmo:client:5.1.0'
Initialize your dependencies. Create a class named SendMessage and add the following code to the main method:
NexmoClient client = NexmoClient.builder().apiKey(NEXMO_API_KEY).apiSecret(NEXMO_API_SECRET).build();
Write the code. Add the following to the main method of the SendMessage class:
TextMessage message = new TextMessage(NEXMO_BRAND_NAME, TO_NUMBER, "A text message sent using the Nexmo SMS API");

SmsSubmissionResponse response = client.getSmsClient().submitMessage(message);

if (response.getMessages().get(0).getStatus() == MessageStatus.OK) {
    System.out.println("Message sent successfully.");
} else {
    System.out.println("Message failed with error: " + response.getMessages().get(0).getErrorText());
}
Run your code. We can use the application plugin for Gradle to simplify the running of our application. Update your build.gradle with the following:
apply plugin: 'application'
mainClassName = project.hasProperty('main') ? project.getProperty('main') : ''
Run the following gradle command to execute your application, replacing com.nexmo.quickstart.sms with the package containing SendMessage:
gradle run -Pmain=com.nexmo.quickstart.sms.SendMessage

.NET
Prerequisites. Install dependencies:
Install-Package Nexmo.Csharp.Client
Initialize your dependencies. Create a file named SMSController.cs and add the following code:
var client = new Client(creds: new Nexmo.Api.Request.Credentials
{
    ApiKey = "NEXMO_API_KEY",
    ApiSecret = "NEXMO_API_SECRET"
});
Write the code. Add the following to SMSController.cs:
var results = client.SMS.Send(request: new SMS.SMSRequest
{
    from = "Acme Inc",
    to = TO_NUMBER,
    text = "A test SMS sent using the Nexmo SMS API"
});

PHP
Prerequisites. Install dependencies:
composer require nexmo/client
Initialize your dependencies. Create a file named send-sms.php and add the following code:
$basic = new \Nexmo\Client\Credentials\Basic(NEXMO_API_KEY, NEXMO_API_SECRET);
$client = new \Nexmo\Client($basic);
Write the code. Add the following to send-sms.php:
try {
    $message = $client->message()->send([
        'to' => TO_NUMBER,
        'from' => 'Acme Inc',
        'text' => 'A text message sent using the Nexmo SMS API'
    ]);
    $response = $message->getResponseData();

    if($response['messages'][0]['status'] == 0) {
        echo "The message was sent successfully\n";
    } else {
        echo "The message failed with status: " . $response['messages'][0]['status'] . "\n";
    }
} catch (Exception $e) {
    echo "The message was not sent. Error: " . $e->getMessage() . "\n";
}
Run your code. Save this file to your machine and run it:
php send-sms.php

Python
Prerequisites. Install dependencies:
pip install nexmo
Initialize your dependencies. Create a file named send-an-sms.py and add the following code:
import nexmo

client = nexmo.Client(key=NEXMO_API_KEY, secret=NEXMO_API_SECRET)
Write the code. Add the following to send-an-sms.py:
responseData = client.send_message(
    {
        "from": "Acme Inc",
        "to": TO_NUMBER,
        "text": "A text message sent using the Nexmo SMS API",
    }
)

if responseData["messages"][0]["status"] == "0":
    print("Message sent successfully.")
else:
    print(f"Message failed with error: {responseData['messages'][0]['error-text']}")
Run your code. Save this file to your machine and run it:
python send-an-sms.py

Ruby
Prerequisites. Install dependencies:
gem install nexmo
Initialize your dependencies. Create a file named send.rb and add the following code:
client = Nexmo::Client.new(
  api_key: NEXMO_API_KEY,
  api_secret: NEXMO_API_SECRET
)
Write the code. Add the following to send.rb:
client.sms.send(
  from: 'Acme Inc',
  to: TO_NUMBER,
  text: 'A text message sent using the Nexmo SMS API'
)
Run your code. Save this file to your machine and run it:
ruby send.rb

Try it out
When you run the example above, the text message will be sent to the mobile number that you specified.
Further reading
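All of the examples above require TO_NUMBER in E.164 format (e.g. 447700900000): digits only, country code first, no leading + or 0. A small helper to sanity-check a number before calling the API — this is an illustration, not part of any Nexmo SDK, and it checks only the general shape (digits only, plausible length), not whether the country code actually exists:

```python
import re

# Digits only, no leading 0 or '+', 7-15 digits total (E.164 caps numbers at 15).
E164_RE = re.compile(r"^[1-9]\d{6,14}$")

def looks_like_e164(number: str) -> bool:
    """Rough shape check for E.164-style numbers without a '+' prefix."""
    return bool(E164_RE.match(number))

print(looks_like_e164("447700900000"))    # → True
print(looks_like_e164("+447700900000"))   # → False (strip the '+' first)
print(looks_like_e164("07700900000"))     # → False (national format, leading 0)
```

Rejecting malformed numbers locally avoids burning an API call on a request that is guaranteed to fail.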
Introduction#
Note: This guide is written for an interactive environment such as Jupyter notebooks. The interactive widgets will not work in a static version of this documentation. Instructions for installing Panel and the example notebooks can be found in the Installation Guide.

Panel lets you add interactive controls for just about anything you can display in Python. Panel can help you build simple interactive apps, complex multi-page dashboards, or anything in between. As a simple example, let's say we have loaded the UCI ML dataset measuring the environment in a meeting room:

import pandas as pd; import numpy as np; import matplotlib.pyplot as plt

data = pd.read_csv('https://raw.githubusercontent.com/holoviz/panel/master/examples/assets/occupancy.csv')
data['date'] = data.date.astype('datetime64[ns]')
data = data.set_index('date')
data.tail()

                     Temperature  Humidity  Light         CO2  HumidityRatio  Occupancy
date
2015-02-10 09:29:00        21.05   36.0975  433.0  787.250000       0.005579          1
2015-02-10 09:29:59        21.05   35.9950  433.0  789.500000       0.005563          1
2015-02-10 09:30:59        21.10   36.0950  433.0  798.500000       0.005596          1
2015-02-10 09:32:00        21.10   36.2600  433.0  820.333333       0.005621          1
2015-02-10 09:33:00        21.10   36.2000  447.0  821.000000       0.005612          1

And we've written some code that smooths a time series and plots it using Matplotlib with outliers highlighted:

from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvas

%matplotlib inline

def mpl_plot(avg, highlight):
    fig = Figure()
    FigureCanvas(fig)  # not needed in mpl >= 3.1
    ax = fig.add_subplot()
    avg.plot(ax=ax)
    if len(highlight):
        highlight.plot(style='o', ax=ax)
    return fig

def find_outliers(variable='Temperature', window=30, sigma=10, view_fn=mpl_plot):
    avg = data[variable].rolling(window=window).mean()
    residual = data[variable] - avg
    std = residual.rolling(window=window).std()
    outliers = (np.abs(residual) > std * sigma)
    return view_fn(avg, avg[outliers])

We can call the function with parameters and get a plot:

find_outliers(variable='Temperature', window=20, sigma=10)

[figure: smoothed Temperature series with outlier points highlighted]

It works! But exploring all these parameters by typing Python is slow and tedious. Plus we want our boss, or the boss's boss, to be able to try it out. If we wanted to try out lots of combinations of these values to understand how the window and sigma affect the plot, we could reevaluate the above cell lots of times, but that would be a slow and painful process, and is only really appropriate for users who are comfortable with editing Python code. In the next few examples we will demonstrate how to use Panel to quickly add some interactive controls to an object and make a simple app. To see an overview of the different APIs Panel offers, see the API user guide; for a quick reference to various Panel functionality, see the overview.

Interactive Panels#
Instead of editing code, it's much quicker and more straightforward to use sliders to adjust the values interactively. You can easily make a Panel app to explore a function's parameters using pn.interact, which is similar to the ipywidgets interact function:

import panel as pn
pn.extension()

pn.interact(find_outliers)

As long as you have a live Python process running, dragging these widgets will trigger a call to the find_outliers callback function, evaluating it for whatever combination of parameter values you select and displaying the results. A Panel like this makes it very easy to explore any function that produces a visual result of a supported type, such as Matplotlib (as above), Bokeh, Plotly, Altair, or various text and image types.

Components of Panels#
interact is convenient, but what if you want more control over how it looks or works?
First, let’s see what interact actually creates, by grabbing that object and displaying its representation: kw = dict(window=(1, 60), variable=sorted(list(data.columns)), sigma=(1, 20)) i = pn.interact(find_outliers, **kw) i.pprint() Column [0] Column [0] Select(name='variable', options=['CO2', 'Humidity', ...], value='Temperature') [1] IntSlider(end=60, name='window', start=1, value=30) [2] IntSlider(end=20, name='sigma', start=1, value=10) [1] Row [0] Matplotlib(Figure, name='interactive02986') As you can see, the interact call created a pn.Column object consisting of a WidgetBox (with 3 widgets) and a pn.Row with one Matplotlib figure object. Panel is compositional, so you can mix and match these components any way you like, adding other objects as needed: text = "<br>\n# Room Occupancy\nSelect the variable, and the time window for smoothing" p = pn.Row(i[1][0], pn.Column(text, i[0][0], i[0][1])) p Note that the widgets stay linked to their plot even if they are in a different notebook cell: i[0][2] Also note that Panel widgets are reactive, so they will update even if you set the values by hand: i[0][2].value = 5 Composing new Panels# You can use this compositional approach to combine different components such as widgets, plots, text, and other elements needed for an app or dashboard in arbitrary ways. The interact example builds on a reactive programming model, where an input to the function changes and Panel reactively updates the output of the function. interact is a convenient way to create widgets from the arguments to your function automatically, but Panel also provides a more explicit reactive API letting you specifically define connections between widgets and function arguments, and then lets you compose the resulting dashboard manually from scratch. In the example below we explicitly declare each of the components of an app: widgets, a function to return the plot, column and row containers, and the completed occupancy Panel app. 
Widget objects have multiple "parameters" (current value, allowed ranges, and so on), and here we will use Panel's bind function to declare that the function's input values should come from the widgets' value parameters. Now when the function and the widgets are displayed, Panel will automatically update the displayed output whenever any of the inputs change:

import panel.widgets as pnw

variable = pnw.RadioButtonGroup(name='variable', value='Temperature', options=list(data.columns))
window = pnw.IntSlider(name='window', value=10, start=1, end=60)

reactive_outliers = pn.bind(find_outliers, variable, window, 10)

widgets = pn.Column("<br>\n# Room occupancy", variable, window)
occupancy = pn.Row(reactive_outliers, widgets)
occupancy

Deploying Panels#
The above panels all work in the notebook cell (if you have a live Jupyter kernel running), but unlike other approaches such as ipywidgets, Panel apps work just the same in a standalone server. For instance, the app above can be launched as its own web server on your machine by uncommenting and running the following cell:
That’s fine for small, quick projects or projects dominated by visualization code, but what about large-scale, long-lived projects, where the code is used in many different contexts over time, such as in large batch runs, one-off command-line usage, notebooks, and deployed dashboards? For larger projects like that, it’s important to be able to separate the parts of the code that are about the underlying domain (i.e. application or research area) from those that are tied to specific display technologies (such as Jupyter notebooks or web servers).

For such usages, Panel supports objects declared with the separate Param library, which provides a GUI-independent way of capturing and declaring the parameters of your objects (and dependencies between your code and those parameters), in a way that’s independent of any particular application or dashboard technology. For instance, the above code can be captured in an object that declares the ranges and values of all parameters, as well as how to generate the plot, independently of the Panel library or any other way of interacting with the object:

import param

class RoomOccupancy(param.Parameterized):
    variable = param.Selector(objects=list(data.columns))
    window = param.Integer(default=10, bounds=(1, 20))
    sigma = param.Number(default=10, bounds=(0, 20))

    def view(self):
        return find_outliers(self.variable, self.window, self.sigma)

obj = RoomOccupancy()
obj

RoomOccupancy(name='RoomOccupancy03019', sigma=10, variable='Temperature', window=10)

The RoomOccupancy class and the obj instance have no dependency on Panel, Jupyter, or any other GUI or web toolkit; they simply declare facts about a certain domain (such as that smoothing requires window and sigma parameters, and that window is an integer greater than 0 and sigma is a positive real number).
This information is then enough for Panel to create an editable and viewable representation for this object without having to specify anything that depends on the domain-specific details encapsulated in obj:

pn.Row(obj.param, obj.view)

To support a particular domain, you can create hierarchies of such classes encapsulating all the parameters and functionality you need across different families of objects, with both parameters and code inheriting across the classes as appropriate, all without any dependency on a particular GUI library or even the presence of a GUI at all. This approach makes it practical to maintain a large codebase, all fully displayable and editable with Panel, in a way that can be maintained and adapted over time.

Linking plots and actions between panes

The above approaches each work with a very wide variety of displayable objects, including images, equations, tables, and plots. In each case, Panel provides interactive functionality using widgets and updates the displayed objects accordingly, while making very few assumptions about what actually is being displayed. Panel also supports richer, more dynamic interactivity where the displayed object is itself interactive, such as the JavaScript-based plots from Bokeh and Plotly.

For instance, if we substitute the Bokeh wrapper hvPlot for the Matplotlib wrapper provided with Pandas, we automatically get interactive plots that allow zooming, panning and hovering:

import hvplot.pandas

def hvplot(avg, highlight):
    return avg.hvplot(height=200) * highlight.hvplot.scatter(color='orange', padding=0.1)

text2 = "## Room Occupancy\nSelect the variable and the smoothing values"

hvp = pn.interact(find_outliers, view_fn=hvplot, **kw)
pn.Column(pn.Row(pn.panel(text2, width=400), hvp[0]), hvp[1]).servable("Occupancy")

These interactive actions can be combined with more complex interactions with a plot (e.g. tap, hover) to make it easy to explore data more deeply and uncover connections.
For instance, we can use HoloViews to make a more full-featured version of the hvPlot example that displays a table of the current measurement values at the hover position on the plot:

import holoviews as hv

tap = hv.streams.PointerX(x=data.index.min())

def hvplot2(avg, highlight):
    line = avg.hvplot(height=300, width=500)
    outliers = highlight.hvplot.scatter(color='orange', padding=0.1)
    tap.source = line
    return (line * outliers).opts(legend_position='top_right')

@pn.depends(tap.param.x)
def table(x):
    index = np.abs((data.index - x).astype(int)).argmin()
    return data.iloc[index]

app = pn.interact(find_outliers, view_fn=hvplot2, **kw)
pn.Row(
    pn.Column("## Room Occupancy\nHover over the plot for more information.", app[0]),
    pn.Row(app[1], table)
)

Exploring further

For a quick reference of different Panel functionality refer to the overview. If you want a more detailed description of different ways of using Panel, each appropriate for different applications, see the following materials:

• APIs: An overview of the different APIs offered by Panel.
• Interact: Instant GUI, given a function with arguments
• Widgets: Explicitly instantiating widgets and linking them to actions
• Parameters: Capturing parameters and their links to actions declaratively

Just pick the style that seems most appropriate for the task you want to do, then study that section of the user guide.
Regardless of which approach you take, you’ll want to learn more about Panel’s panes and layouts:

• Components: An overview of the core components of Panel including Panes, Widgets and Layouts
• Customization: How to set styles and sizes of Panel components
• Deploy & Export: An overview on how to display, export and deploy Panel apps and dashboards

Finally, if you are building a complex multi-stage application, you can consider our support for organizing workflows consisting of multiple stages:

• Pipelines: Making multi-stage processing pipelines in notebooks and as deployed apps

Or for more polished apps you can make use of Templates to achieve exactly the look and feel you want:

• Templates: Composing one or more Panel objects into a jinja2 template with full control over layout and styling.

This web page was generated from a Jupyter notebook and not all interactivity will work on this website; download the notebook and run it locally for full Python-backed interactivity.
When you change your subscriptions and databases you also change the cost of your deployment. With a dry-run request, you can evaluate the impact that subscription and database changes cause before you deploy these changes:

• Create subscription
• Create a database
• Update a database

Defining a dry-run request

API operations that support dry-run requests accept the dryRun boolean parameter in the JSON request body. For example, the JSON body of a create subscription request can include the dryRun=true parameter:

{
    "name": "Basic subscription example",
    "dryRun": true,
    "paymentMethodId": 8240,
    "cloudProviders": [
        {
            "cloudAccountId": 9838,
            "regions": [
                {
                    "region": "us-east-1",
                    "networking": {
                        "deploymentCIDR": "10.0.0.0/24"
                    }
                }
            ]
        }
    ],
    "databases": [
        {
            "name": "Redis-database-example",
            "memoryLimitInGb": 1.1
        }
    ]
}

Executing a dry-run request

Dry-run requests behave like regular requests, except that no changes are made to existing resources. A dry-run request produces a cost evaluation report for the subscription.
API Operation | dryRun=false (default) | dryRun=true
Create subscription | Creates a subscription | Returns a cost evaluation report of the planned subscription
Create database | Creates a new database in the subscription | Returns a cost evaluation report for the relevant subscription
Update database | Changes the specified database | Returns a cost evaluation report and evaluates whether the relevant subscription requires additional resources based on the database modification

Example of a dry-run response

Here is an example of the pricing response section of a dry-run request:

"response": {
    "resource": {
        "pricing": [
            {
                "type": "Shards",
                "quantity": 2,
                "quantityMeasurement": "shards",
                "pricePerUnit": 0.308,
                "priceCurrency": "USD",
                "pricePeriod": "hour"
            },
            {
                "type": "EBS Volume",
                "quantity": 71,
                "quantityMeasurement": "GB"
            },
            {
                "type": "c5.xlarge",
                "quantity": 2,
                "quantityMeasurement": "instances"
            },
            {
                "type": "m5.large",
                "quantity": 1,
                "quantityMeasurement": "instances"
            }
        ]
    }
}

The structure of the pricing response depends on the cloud account used by the request:

• For a customer-provided cloud account - The response includes pricing data for the shards, and lists the resources required (storage and compute instances) without pricing data
• For a Redis Labs internal cloud account (cloudAccountId = 1) - The response includes pricing data for both the shards and the resources required (storage and compute instances)
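The dry-run toggle is easy to script. The sketch below builds a create-database body and flips the dryRun flag before serialization; the as_dry_run helper is a hypothetical convenience, not part of the documented API. Only the dryRun field itself comes from this reference:

```python
import json

def as_dry_run(payload):
    """Return a copy of a request body with the dryRun flag set.

    Hypothetical helper, not part of the documented API; only the dryRun
    field itself is defined by the API reference above.
    """
    body = dict(payload)  # leave the caller's payload untouched
    body["dryRun"] = True
    return body

create_db = {"name": "Redis-database-example", "memoryLimitInGb": 1.1}
body = json.dumps(as_dry_run(create_db))
# `body` is what you would POST to the create-database endpoint; with
# dryRun set, the API returns a cost evaluation report and creates nothing.
print(body)
```

Reusing one wrapper like this keeps the evaluation request byte-for-byte identical to the real one, apart from the flag.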
Game Development Stack Exchange is a question and answer site for professional and independent game developers.

Question:

I want to be able to move my texture in GLSL. I have set my texture to wrap S and wrap T, but I'm not sure why it won't move. My fragment shader looks like this at the moment:

uniform sampler2D n_mapTex;
uniform sampler2D n_mapTex2;
varying mediump vec2 TexCoord;
varying mediump vec2 TexCoord2;

// This gets updated within the main code
uniform mediump float vTime;

void main()
{
    gl_FragColor = texture2D(n_mapTex, vec2(TexCoord.x + vTime, TexCoord.y + vTime));
}

Within my code I have made a function to calculate FPS, and I use the delta time from that function to pass into the fragment shader:

void OGLESIntroducingPVRTools::timer(){
    /* This method records the time taken to start rendering a frame and holds that
    value in another variable; we then start to count again. To get delta time we
    subtract the previous time from the current time, which gives us, in milliseconds,
    how long it took for a frame to render. We then divide this by 1000, converting
    it from ms per frame to FPS. */
    FrameCount++;
    p_Time = c_Time;
    c_Time = PVRShellGetTime();
    elapsed = c_Time / 1000.0f;
    DT = ((float)(c_Time - p_Time)) / 1000.0f;
    fCount += DT;
    if(fCount >= 1.0f) // if time is over 1 second, reset counters and recount
    {
        FPS = FrameCount;
        FrameCount = 0;
        fCount = 0;
    }
}

And in my renderscene method, which updates, I have this code to pass the value to the uniform within the fragment shader:

glUniform1i(glGetUniformLocation(m_ShaderProgram.uiId, "vTime"), DT);

1 Answer (accepted):

vTime is a float, but you're passing it as an int, since you're using glUniform1i(). Adding integer values to your texcoords has no effect. Use glUniform1f() instead.
pntnode100.dll

Process name: CorelDRAW(R)
Application using this process: CorelDRAW(R)

What is pntnode100.dll doing on my computer?

Node Tool Library. This process is still being reviewed. If you have some information about it feel free to send us an email at pl[at]uniblue[dot]com

Non-system processes like pntnode100.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.

In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device.

Is pntnode100.dll harmful?

pntnode100.dll is unrated.

Can I stop or remove pntnode100.dll?

Most non-system processes that are running can be stopped because they are not involved in running your operating system. pntnode100.dll is used by 'CorelDRAW(R)', an application created by 'Unknown'. To stop pntnode100.dll permanently, uninstall 'CorelDRAW(R)' from your system. Uninstalling applications can leave invalid registry entries, accumulating over time.

Is pntnode100.dll CPU intensive?

This process is not considered CPU intensive.
However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.

Why is pntnode100.dll giving me errors?

Process-related issues are usually related to problems encountered by the application that runs the process. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.

Process Library is the unique and indispensable process listing database since 2004, now counting 140,000 processes and 55,000 DLLs.
Sending automated emails with PHP, Swiftmailer and Twig

I'm one of the hosts of a Coding Dojo in my city called Katayunos. Katayunos is a mix of the word Kata (coding kata) and "Desayuno" (breakfast in Spanish). A group of brave programmers meet together one Saturday morning and, after having breakfast, we pick one coding kata and we practise TDD and pair programming. It's difficult to explain to non-geek people (why the hell we wake up early one Saturday morning to do this), but if you are reading this post it probably sounds good :).

My work as host is basically to pick the place and to encourage people to join the Coding Dojo. One way of doing this (besides Twitter buzz) is to take my address book and send one bulk email to all of them, inviting them to join us. I don't like this kind of mail. It looks like spam, so I prefer to send a personalized email. This email has a common part (the place location, the hour, the event description, ...) and a personalized part. I could do it manually, the list isn't so huge, but definitely that's not cool. Because of that I have done a little script to perform this operation. I could write a simple PHP script, but we are speaking about announcing an event about TDD, SOLID and things like that, so I must use the "right way". Let's start.

I manage my list of contacts within a spreadsheet. In this spreadsheet I have the name, the email and one paragraph with the personalized part for each one of my contacts. I can easily export this spreadsheet to a csv document like this:

Peter Parker, [email protected], "Lorem ipsum dolor sit amet, ..."
Clark Kent, [email protected], "consectetur adipisicing elit, ..."
Juan López Fernández, [email protected], "sed do eiusmod tempor incididunt .."

So first of all I need to parse this file.
class Parser
{
    private $data;

    public function createFromCsvFile($path)
    {
        $handle = fopen($path, "r");
        while (($data = fgetcsv($handle)) !== false) {
            $this->data[] = [
                'name'  => trim($data[0]),
                'email' => trim($data[1]),
                'body'  => isset($data[2]) ? trim($data[2]) : null,
            ];
        }
    }

    public function getData()
    {
        return $this->data;
    }
}

Easy. Now I want to send this parsed array by email, so I will include Swiftmailer in my composer.json file. My email will also have one template and one personalized part. We will use Twig to manage the template.

"require": {
    "swiftmailer/swiftmailer": "v5.0.2",
    "twig/twig": "v1.13.2"
}

Now we will create a class to wrap the code needed to send emails:

class Mailer
{
    private $swiftMailer;
    private $swiftMessage;

    function __construct(Swift_Mailer $swiftMailer, Swift_Message $swiftMessage)
    {
        $this->swiftMailer = $swiftMailer;
        $this->swiftMessage = $swiftMessage;
    }

    public function sendMessage($to, $body)
    {
        $this->swiftMessage->setTo($to);
        $this->swiftMessage->setBody(strip_tags($body));
        $this->swiftMessage->addPart($body, 'text/html');

        return $this->swiftMailer->send($this->swiftMessage);
    }
}

Our Mailer class sends mails. Our Parser class parses one csv file. Now we need something to join those two classes: the Spammer class. Spammer will take the parsed array and send the mails one by one using the Mailer class.

class Spammer
{
    private $twig;
    private $mailer;

    function __construct(Twig_Environment $twig, Mailer $mailer)
    {
        $this->twig = $twig;
        $this->mailer = $mailer;
    }

    public function sendEmails($data)
    {
        foreach ($data as $item) {
            $to = $item['email'];
            $this->mailer->sendMessage($to, $this->twig->render('mail.twig', $item));
        }
    }
}

OK, with these three classes I can easily send my emails. This script is a console script, and we also want pretty console colours and that kind of stuff: symfony/console to the rescue. But I've a problem now.
I want to write one message when a mail is sent and another one when something goes wrong. To do that I need to change my Spammer class, but my Spammer class doesn't know anything about my console command. If I inject the console command into my Spammer class I will violate the Law of Demeter, and that's a sin. What can we do? Easy: the mediator pattern. We could write our own implementation of the mediator pattern, but we can also use symfony/event-dispatcher, a well done implementation of this pattern. We change our Spammer class to:

use Symfony\Component\EventDispatcher\EventDispatcher;

class Spammer
{
    private $twig;
    private $mailer;
    private $dispatcher;

    function __construct(Twig_Environment $twig, Mailer $mailer, EventDispatcher $dispatcher)
    {
        $this->twig = $twig;
        $this->mailer = $mailer;
        $this->dispatcher = $dispatcher;
    }

    public function sendEmails($data)
    {
        foreach ($data as $item) {
            $to = $item['email'];
            try {
                $this->mailer->sendMessage($to, $this->twig->render('mail.twig', $item));
                $this->dispatcher->dispatch(MailEvent::EVENT_MAIL_SENT, new MailEvent\Sent($to));
            } catch (\Exception $e) {
                $this->dispatcher->dispatch(MailEvent::EVENT_SENT_ERROR, new MailEvent\Error($to, $e));
            }
        }
    }
}

Now we can easily build our console command class:

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Input\InputOption;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\EventDispatcher\EventDispatcher;

class SpamCommand extends Command
{
    private $parser;
    private $dispatcher;

    protected function configure()
    {
        $this->setName('spam:run')
            ->setDescription('Send Emails');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $output->writeln("Sending mails ...");

        $this->dispatcher->addListener(MailEvent::EVENT_MAIL_SENT,
            function (MailEvent\Sent $event) use ($output) {
                $output->writeln("<info>Mail sent to</info>: <fg=black;bg=cyan>{$event->getTo()}</fg=black;bg=cyan>");
            }
        );
        $this->dispatcher->addListener(MailEvent::EVENT_SENT_ERROR,
            function (MailEvent\Error $event) use ($output) {
                $output->writeln("<error>Error sending mail to</error>: <fg=black;bg=cyan>{$event->getTo()}</fg=black;bg=cyan> Error: " . $event->getException()->getMessage());
            }
        );

        $this->spammer->sendEmails($this->parser->getData());
        $output->writeln("End");
    }

    public function setSpammer(Spammer $spammer)
    {
        $this->spammer = $spammer;
    }

    public function setParser(Parser $parser)
    {
        $this->parser = $parser;
    }

    public function setDispatcher(EventDispatcher $dispatcher)
    {
        $this->dispatcher = $dispatcher;
    }
}

With all these parts we can build our script. Our classes are decoupled. That's good, but setting up the dependencies properly can be hard. Because of that we will use symfony/dependency-injection. With the symfony DIC we can set up our dependency tree within a yaml file. Our main services.yml:

imports:
    - resource: conf.yml
    - resource: mail.yml
    - resource: twig.yml

parameters:
    base.path: .

services:
    parser:
        class: Parser
        calls:
            - [createFromCsvFile, [%mail.list%]]
    mailer:
        class: Mailer
        arguments: [@swift.mailer, @swift.message]
    spam.command:
        class: SpamCommand
        calls:
            - [setParser, [@parser]]
            - [setDispatcher, [@dispatcher]]
            - [setSpammer, [@spammer]]
    spammer:
        class: Spammer
        arguments: [@twig, @mailer, @dispatcher]
    dispatcher:
        class: Symfony\Component\EventDispatcher\EventDispatcher

I like to separate the configuration files to reuse them between projects and to make them more readable.
One for twig:

parameters:
    twig.path: %base.path%/templates
    twig.conf:
        auto_reload: true

services:
    twigLoader:
        class: Twig_Loader_Filesystem
        arguments: [%twig.path%]
    twig:
        class: Twig_Environment
        arguments: [@twigLoader, %twig.conf%]

another one for swiftmailer:

services:
    swift.message:
        class: Swift_Message
        calls:
            - [setSubject, [%mail.subject%]]
            - [setFrom, [%mail.from.mail%: %mail.from.name%]]
    swift.transport:
        class: Swift_SmtpTransport
        arguments: [%mail.smtp.host%, %mail.smtp.port%, %mail.smtp.encryption%]
        calls:
            - [setUsername, [%mail.smtp.username%]]
            - [setPassword, [%mail.smtp.password%]]
    swift.mailer:
        class: Swift_Mailer
        arguments: [@swift.transport]

and the last one for the configuration parameters:

parameters:
    mail.do.not.send.mails: false
    mail.list: %base.path%/mailList.csv
    mail.subject: mail subject
    mail.from.name: My Name
    mail.from.mail: [email protected]
    mail.smtp.username: my_smtp_username
    mail.smtp.password: my_smtp_password
    mail.smtp.host: smtp.gmail.com
    mail.smtp.port: 465
    mail.smtp.encryption: ssl

Now we can build our script.

use Symfony\Component\Console\Application;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\DependencyInjection\Loader\YamlFileLoader;

$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . '/conf'));
$loader->load('services.yml');
$container->setParameter('base.path', __DIR__);

$application = new Application();
$application->add($container->get('spam.command'));
$application->run();

And that's all. My colleagues of the next Katayuno will be invited in a "SOLID" way :). Source code is available in my github account.

BTW: Do you want to organize one Katayuno in your city? It's very easy. Feel free to contact me for further information.
Because of that I’m will perform a small benchmark test of several PHP template engines. That’s not an exhaustive performance test. It’s only my personal test. Template engines has a lot of features but I normally only use a few of them and the other features very seldom. In this performance test I will check the same features under different template engines to see the syntax differences and the performance. The template engines selected for the test are Smarty, Twig and Haanga. Let’s start: Smarty. v3.0.6 It’s probably the most famous template engine. It’s a mature project. For years it was “the” template engine and the others were the “alternatives”. It was famous because of the speed. Twig. v1.0.0-RC1-8 It’s a new template engine developed by Fabien Potencier, the creator of the symfony framework. One of the PHP’s rock stars nowadays. It’s going to be an important part of the new symfony 2.0 framework. Twig borrows the template syntax from Django (probably the main web framework if we work with Python) Haanga. v1.0.4-14 It’s another new template engine using the Django style. It was developed for Menéame by César Rodas. I’ve decided to create two tests. One with a simple template and another using template Inheritance. The both cases renders one html page using one variable, filters and for loop to create an HTML table. Basically I’ve created those test because they’re the things I normally use. I will run the test with an HTML table of 50 rows and 1000 rows Simple template Smarty {* Smarty. 
indexFull.tpl*} <html> <head> <title>{$title}</title> </head> <body> <h2>An example with {$title|capitalize}</h2> <b>Table with {$number|escape} rows</b> <table> {foreach $table as $row} <tr bgcolor="{cycle values="#aaaaaa,#ffffff"}"> <td>{$row.id}</td> <td>{$row.name}</td> </tr> {foreachelse} <tr><td>No items were found</td></tr> {/foreach} </table> </body> </html> And the PHP conde: // index.php $time = microtime(TRUE); $mem = memory_get_usage(); define('BASE_DIR', dirname(__file__)); require(BASE_DIR . '/include/smarty/Smarty.class.php'); $smarty = new Smarty(); $smarty->setTemplateDir(BASE_DIR . '/smarty/templates'); $smarty->setCompileDir(BASE_DIR . '/smarty/templates_c'); $smarty->setCacheDir(BASE_DIR . '/smarty/cache'); $smarty->setConfigDir(BASE_DIR .'/smarty/configs'); $smarty->assign('title', "smarty"); $rows = 1000; $data = array(); for ($i=0; $i<$rows; $i++ ) { $data[] = array('id' => $i, 'name' => "name {$i}"); } $smarty->assign('table', $data); $smarty->assign('number', $rows); $smarty->display('indexFull.tpl'); print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time)); Twig {# Twig. indexFull.html #} <html> <head> <title>{{ title }}</title> </head> <body> <h2>An example with {{ title|title }}</h2> <b>Table with {{ number|escape}} rows</b> <table> {% for row in table %} <tr bgcolor="{{ cycle(['#aaaaaa', '#ffffff'], row.id) }}"> <td>{{ row.id }}</td> <td>{{ row.name }}</td> </tr> {% endfor %} </table> </body> </html> And the PHP code: // index.php $time = microtime(TRUE); $mem = memory_get_usage(); define('BASE_DIR', dirname(__file__)); require_once BASE_DIR . '/include/Twig/Autoloader.php'; Twig_Autoloader::register(); $loader = new Twig_Loader_Filesystem(BASE_DIR . '/twig/templates'); $twig = new Twig_Environment($loader, array( 'cache' => BASE_DIR . 
'/twig/compiled', 'auto_reload' => true )); $template = $twig->loadTemplate('indexFull.html'); $rows = 1000; $data = array(); for ($i = 0; $i < $rows; $i++) { $data[] = array('id' => $i, 'name' => "name {$i}"); } $template->display(array( 'number' => $rows, 'title' => 'twig', 'table' => $data )); print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time)); Haanga {# Haanga. indexFull.html #} <html> <head> <title>{{ title }}</title> </head> <body> <h2>An example with {{ title|title }}</h2> <b>Table with {{ number|escape}} rows</b> <table> {% for row in table %} <tr bgcolor="{% cycle '#aaaaaa' '#ffffff' %}"> <td>{{ row.id }}</td> <td>{{ row.name }}</td> </tr> {% endfor %} </table> </body> </html> And the PHP code: // index.php $time = microtime(TRUE); $mem = memory_get_usage(); define('BASE_DIR', dirname(__file__)); require(BASE_DIR . '/include/Haanga.php'); Haanga::configure(array( 'template_dir' => BASE_DIR . '/haanga/templates', 'cache_dir' => BASE_DIR . '/haanga/compiled', )); $rows = 1000; $data = array(); for ($i=0; $i<$rows; $i++ ) { $data[] = array('id' => $i, 'name' => "name {$i}"); } Haanga::Load('indexFull.html', array( 'number' => $rows, 'title' => 'haanga', 'table' => $data )); print_r(array('memory' => (memory_get_usage() - $mem) / (1024 * 1024), 'seconds' => microtime(TRUE) - $time)); With template Inheritance With this test I use the same php file, changing template name from indexFull to index. Smarty {* Smarty. index.tpl*} {extends file="layout.tpl"} {block name=table} <table> {foreach $table as $row} <tr bgcolor="{cycle values="#aaaaaa,#ffffff"}"> <td>{$row.id}</td> <td>{$row.name}</td> </tr> {foreachelse} <tr><td>No items were found</td></tr> {/foreach} </table> {/block} {* Smarty. layout.tpl*} <html> <head> <title>{$title}</title> </head> <body> <h2>An example with {$title|capitalize}</h2> <b>Table with {$number|escape} rows</b> {block name=table}{/block} </body> </html> Twig {# Twig. 
index.html #} {% extends "layout.html" %} {% block table %} <table> {% for row in table %} <tr bgcolor="{{ cycle(['#aaaaaa', '#ffffff'], row.id) }}"> <td>{{ row.id }}</td> <td>{{ row.name }}</td> </tr> {% else %} <tr><td>No items were found</td></tr> {% endfor %} </table> {% endblock %} {# Twig. layout.html #} <html> <head> <title>{{ title }}</title> </head> <body> <h2>An example with {{ title|title }}</h2> <b>Table with {{ number|escape}} rows</b> {% block table %}{% endblock %} </body> </html> Haanga {% extends "layout.html" %} {# Haanga. index.html #} {% block table %} <table> {% for row in table %} <tr bgcolor="{% cycle '#aaaaaa' '#ffffff' %}"> <td>{{ row.id }}</td> <td>{{ row.name }}</td> </tr> {% endfor %} </table> {% endblock %} {# Haanga. layout.html #} <html> <head> <title>{{ title }}</title> </head> <body> <h2>An example with {{ title|title }}</h2> <b>Table with {{ number|escape}} rows</b> {% block table %}{% endblock %} </body> </html> Outcomes of the tests: (50 rows) Smarty Twig Haanga Simple template Memory: 0.684497 Time: 0.023710 Memory: 0.598434 Time: 0.025444 Memory: 0.124019 Time:  0.004004 Template Inheritance Memory: 0.685134 Time: 0.023761 Memory: 0.619461 Time: 0.028100 Memory: 0.133472 Time: 0.005005 (1000 rows) Smarty Twig Haanga Simple template Memory: 1.222743 Time: 0.094762 Memory: 1.033226 Time: 0.196187 Memory: 0.558811 Time: 0.043151 Template Inheritance Memory: 1.194095 Time: 0.090528 Memory: 1.054237 Time: 0.191694 Memory: 0.646381 Time: 0.044402 Haanga really rocks in the test. It’s the fastest in all cases and it’s the best using memory. The main problem I’ve seen with Haanga is the lack of documentation. When I wanted to use the cycle filter (to create the zebra style in the HTML table) I didn’t find anything about it. I had to browse the source code and finally I found it in one tests. Whereas Smarty documentation is brilliant and Twig is good enough. 
The HTML template syntax is almost the same in Twig and Haanga (in fact, both of them are Django style). Smarty is a bit different, but still very similar. The PHP part in Smarty looks a bit old-fashioned compared with Haanga and Twig, but it's really easy to use. The performance of Twig and Smarty is similar; Twig is slightly better, but with simple templates it's almost the same.
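As an aside, the measurement wrapper repeated in each PHP snippet above (a memory delta and wall-clock time taken around a single render) ports easily to other languages. Here is a rough Python analogue using only the standard library; the measure helper and the string-formatting "renderer" are illustrative stand-ins, not part of any of the engines tested:

```python
import time
import tracemalloc

def measure(render, *args):
    # Track peak memory and elapsed wall-clock time around one render,
    # mirroring the memory_get_usage()/microtime() bookkeeping in the PHP tests.
    tracemalloc.start()
    start = time.perf_counter()
    result = render(*args)
    seconds = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, {'seconds': seconds, 'memory': peak / (1024 * 1024)}

# "Render" a 1000-row table with plain string formatting as a dummy engine.
rows = [{'id': i, 'name': 'name %d' % i} for i in range(1000)]
html, stats = measure(
    lambda data: ''.join('<tr><td>%(id)s</td><td>%(name)s</td></tr>' % r for r in data),
    rows,
)
print(stats)
```

As in the PHP version, the numbers only make sense relative to each other; absolute values depend on the machine and interpreter.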
Learning Vue Step by Step (10)

This post looks at component communication. Parent-child communication was covered in earlier posts; Vue recommends "props in, events out". But how do sibling components communicate? The official docs do offer an approach. Let's work through it with the following scenario (illustrated by a screenshot in the original post): a search-form component holds various search criteria, and clicking its search button should load data into a list component for rendering.

I'll show three implementations here. I'm not arguing which is most appropriate, they're just for demonstration:

1. Wrap everything in a parent component and move all the operations up into the parent.
2. The search component fires an event up to the parent; the parent listens for the event, runs the query when it occurs, and passes the result as props down to the list component. This is the approach we implemented in earlier posts, so here is just a quick demo.

First define our components: SearchComponent, AppComponent, ListComponent.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>demo4</title>
    <script src="https://cdn.bootcss.com/vue/2.4.1/vue.js"></script>
</head>
<body>
    <div id="app">
        <app></app>
    </div>
    <script>
        var SearchComponent = {
            template:`
                <div class="toolbar">
                    <input type="text" placeholder="keyword" v-model="keyword"/>
                    <input type="text" placeholder="description" v-model="desc"/>
                    <input type="button" value="search" @click="search()" />
                </div>
            `,
            data:function(){
                return {
                    keyword:'',
                    desc:''
                }
            },
            methods:{
                search:function(){
                    this.$emit('onsearch',{keyword:this.keyword,desc:this.desc});
                }
            }
        }
        var ListComponent = {
            template:`
                <div class="list" >
                    {{list}}
                </div>
            `,
            props:['list']
        }
        var AppComponent={
            template:`
                <div class="container">
                    <search @onsearch="search($event)" ></search>
                    <list :list="datas" ></list>
                </div>
            `,
            components:{
                'list':ListComponent,
                'search':SearchComponent
            },
            methods:{
                search:function($e){
                    this.datas=JSON.stringify({
                        data:[],
                        info:'info'
                    });
                }
            },
            data:function(){
                return {
                    datas:null
                }
            }
        }
        var app=new Vue({
            el:'#app',
            components:{
                'app':AppComponent
            }
        });
    </script>
</body>
</html>

Click the search button and you get the result shown in the original post's screenshot.

The example above is very simple, and all of its code has appeared in earlier posts, so I won't go through it in detail. The data flow here is:

1. Click the button: data flows from the search component to the parent.
2. The parent listens for onsearch; when the event fires it handles the query and assigns to datas, and data flows from the parent down to the list component.

The parent here is just a relay station, providing a hand-off point for the data flow. So can we achieve the same thing without a parent component? After all, we can't create a redundant parent every time two siblings need to talk, and with deeply nested trees that becomes a real problem. For sibling communication the official docs also mention something called an event bus. Next, let's implement the second scheme, based on an event bus. Modify the code as follows:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>demo4</title>
    <script src="https://cdn.bootcss.com/vue/2.4.1/vue.js"></script>
</head>
<body>
    <div id="app">
        <search></search>
        <list></list>
    </div>
    <script>
        var eventBus = new Vue();
        var SearchComponent = {
            template: `
                <div class="toolbar">
                    <input type="text" placeholder="keyword" v-model="keyword"/>
                    <input type="text" placeholder="description" v-model="desc"/>
                    <input type="button" value="search" @click="search()" />
                </div>
            `,
            data: function () {
                return {
                    keyword: '',
                    desc: ''
                }
            },
            methods: {
                search: function () {
                    // this.$emit('onsearch',{keyword:this.keyword,desc:this.desc});
                    eventBus.$emit('onsearch', { keyword: this.keyword, desc: this.desc });
                }
            }
        }
        var ListComponent = {
            template: `
                <div class="list" >
                    {{list}}
                </div>
            `,
            data: function () {
                return {
                    list: null
                }
            },
            created: function () {
                var self = this;
                eventBus.$on('onsearch', function ($e) {
                    console.log($e);
                    self.list = JSON.stringify($e);
                })
            }
        }
        var app = new Vue({
            el: '#app',
            components: {
                // 'app':AppComponent
                'list': ListComponent,
                'search': SearchComponent
            }
        });
    </script>
</body>
</html>

Here we lean on a global empty Vue instance to act as a global event bus. Of course, we could also use or write our own event bus; it's fairly simple. The rough idea: keep a registry of callback lists, and define two methods, on and emit, where on pushes a key and its callback into the registry, and emit fires the callbacks registered under a key. Try it yourself if you're interested; save the file and run it.

For simple sibling communication, either this scheme or the first one is enough. But when there are many siblings, or the component tree is deep, scheme one forces us to pass nearly duplicate code down layer by layer, while scheme two makes every component depend on the global Vue instance, that is, on the global event bus. Is there a better way to manage state? Can state management be pulled out into its own layer? That brings us to Vuex.

"Vuex is a state management pattern developed specifically for Vue.js applications. It uses centralized storage to manage the state of all the components in an application, with rules that keep state changes predictable. Vuex also integrates with Vue's official devtools extension, providing advanced debugging features such as zero-config time-travel debugging and state snapshot import/export." Open the official site and this is the first thing you'll see.

Which can read like high-sounding gibberish. In reality it simply does the same job as our event bus, just in a more refined way. At the center of every Vuex application is the store, basically a container that holds most of your application's state. Vuex differs from a plain global object in two respects:

1. Vuex's state storage is reactive. When Vue components read state from the store, they are updated efficiently whenever that state changes.
2. You cannot change the state in the store directly. The only way to change it is to explicitly commit mutations. This makes every state change easy to track, and enables tooling that helps us better understand our applications.

Vuex involves many concepts and conventions; today is just a first taste. Without further ado, here is the same feature refactored onto Vuex:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>demo4</title>
    <script src="https://cdn.bootcss.com/vue/2.4.1/vue.js"></script>
    <script src="https://cdn.bootcss.com/vuex/2.3.1/vuex.js"></script>
</head>
<body>
    <div id="app">
        <search></search>
        <list></list>
    </div>
    <script>
        // var eventBus = new Vue();
        var store = new Vuex.Store({
            state: {
                list: null
            },
            mutations: {
                search: function (state, payload) {
                    state.list = JSON.stringify(payload);
                }
            }
        })
        var SearchComponent = {
            template: `
                <div class="toolbar">
                    <input type="text" placeholder="keyword" v-model="keyword"/>
                    <input type="text" placeholder="description" v-model="desc"/>
                    <input type="button" value="search" @click="search()" />
                </div>
            `,
            data: function () {
                return {
                    keyword: '',
                    desc: ''
                }
            },
            methods: {
                search: function () {
                    this.$store.commit("search",{ keyword: this.keyword, desc: this.desc })
                    //eventBus.$emit('onsearch', { keyword: this.keyword, desc: this.desc });
                }
            }
        }
        var ListComponent = {
            template: `
                <div class="list" >
                    {{list}}
                </div>
            `,
            computed:{
                list:function(){
                    return this.$store.state.list;
                }
            }
        }
        var app = new Vue({
            el: '#app',
            store:store,
            components: {
                // 'app':AppComponent
                'list': ListComponent,
                'search': SearchComponent
            }
        });
    </script>
</body>
</html>

Here we create a global store. The store is a singleton and holds all the state. (Such state should be global or shared; for this example, treat the list component's state as shared and the search component's state as local to the component, and don't take the split too literally.) We adopt the convention: never modify state directly, always modify it by committing mutations, much as setState is used to change state in React. Direct modification raises a runtime error (when Vuex's strict mode is enabled).

The code above involves a few main points:

1. Instantiating Vuex: new Vuex.Store creates the single global store; its options configure state (globally shared) and mutations (synchronous operations only).
2. Wiring Vuex into Vue: inject the store when calling new Vue, just as we injected the router earlier. Once injected, any child component can reach the store through this.$store.
3. State in the store is reactive, so it's best exposed as computed properties on components; every change committed through a mutation then drives the view directly.

This installment merely introduces Vuex; consider it the opening chapter. Rather than covering too much, the goal is a simple first impression. In the posts to come we'll gradually work through the rest, just as we did with vue-router. Stay tuned.
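As a small appendix: the do-it-yourself event bus described in prose earlier (a registry of callback lists plus on and emit methods) can be sketched in a few lines of plain JavaScript. This is only an illustrative sketch, not Vue's actual implementation; createEventBus and the sample payload are invented here, and Vue's real $on/$emit do more (e.g. $off).

```javascript
// Minimal event-bus sketch: a map from event name to callback list.
// Illustrative only; Vue's built-in $on/$emit are more featureful.
function createEventBus() {
  var handlers = {}; // event name -> array of callbacks
  return {
    // on: push a key and its callback into the registry
    $on: function (name, fn) {
      (handlers[name] = handlers[name] || []).push(fn);
    },
    // emit: fire every callback registered under the key
    $emit: function (name, payload) {
      (handlers[name] || []).forEach(function (fn) { fn(payload); });
    }
  };
}

// Usage mirrors the demo: the search side emits, the list side listens.
var eventBus = createEventBus();
var received = null;
eventBus.$on('onsearch', function (e) { received = e; });
eventBus.$emit('onsearch', { keyword: 'vue', desc: 'demo' });
```

A component could then call eventBus.$emit('onsearch', ...) exactly as in the demo, without depending on a global Vue instance.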
Detecting if in editing mode, or just editing a single table row

- (void)tableView:(UITableView *)tableView willBeginEditingRowAtIndexPath:(NSIndexPath *)indexPath {
    _isEditingIndividualRow = YES;
    [super tableView:tableView willBeginEditingRowAtIndexPath:indexPath];
}

- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    [super setEditing:editing animated:animated];
    BOOL inEditingMode = editing && !_isEditingIndividualRow;
    if (inEditingMode) {
        // ...set up toolbar accordingly
    }
}
Your spaces are filled with all the innovative and groundbreaking work (and cat GIFs) that your teams are doing every day. But sometimes it's hard to know whether someone was answering your question or talking about something else. You can quote messages or start threads to help keep track of messages.

Quotes

Quotes work great if you want to remind everyone of something that someone said, or if you're just answering someone's specific question. It's probably not anything that other people likely want to discuss. You can't quote another quote, so you're not making it easy for other people to join in. But that's fine, because not everything needs to be discussed. Think of it as restating a comment for emphasis, or answering a who, what, where, when type of question.

So, use a quote when you want to:
- reiterate someone's previous comment for the group's benefit.
- answer specific who, what, where, when questions.

You can also see where a quoted message was originally posted. If you see a quoted message in a space, click or tap Go to message to see the original message in context.

(Screenshot: clicking the quote button to start a message quoting a previous message.)

Threads

When you start a thread, though, you're opening up a discussion. You're inviting other people to join in and have a focused conversation. Threads often focus on how and why questions that are more open-ended. People can reply to threads and share their opinions about this specific topic. (But avoid derailing an ongoing thread with something off-topic; just start a new discussion in the space instead.)

Start a thread if you want to:
- reply with the expectation that others will join.
- participate in a targeted discussion with anyone in the space.
- answer how or why questions.

(Screenshot: clicking the reply to thread button to continue a conversation.)
OmniSciDB c1a53651b2
StringDictionaryProxy.h

1 /*
2  * Copyright 2022 HEAVY.AI, Inc.
3  *
4  * Licensed under the Apache License, Version 2.0 (the "License");
5  * you may not use this file except in compliance with the License.
6  * You may obtain a copy of the License at
7  *
8  * http://www.apache.org/licenses/LICENSE-2.0
9  *
10  * Unless required by applicable law or agreed to in writing, software
11  * distributed under the License is distributed on an "AS IS" BASIS,
12  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13  * See the License for the specific language governing permissions and
14  * limitations under the License.
15  */
16
17 #ifndef STRINGDICTIONARY_STRINGDICTIONARYPROXY_H
18 #define STRINGDICTIONARY_STRINGDICTIONARYPROXY_H
19
20 #include "Logger/Logger.h" // For CHECK macros
21 #include "Shared/misc.h"
22 #include "StringDictionary.h"
23
24 #include "ThirdParty/robin_hood/robin_hood.h"
25
26 #include <optional>
27 #include <ostream>
28 #include <shared_mutex>
29 #include <string>
30 #include <string_view>
31 #include <tuple>
32 #include <vector>
33
34 namespace StringOps_Namespace {
35 struct StringOpInfo;
36 }
37
38 // used to access a StringDictionary when transient strings are involved
40  public:
41
42
43  StringDictionaryProxy(std::shared_ptr<StringDictionary> sd,
44  const shared::StringDictKey& string_dict_key,
45  const int64_t generation);
46
47  const shared::StringDictKey& getDictKey() const noexcept { return string_dict_key_; };
48
49  bool operator==(StringDictionaryProxy const&) const;
50  bool operator!=(StringDictionaryProxy const&) const;
51
52  int32_t getOrAdd(const std::string& str) noexcept;
53  StringDictionary* getDictionary() const noexcept;
54  int64_t getGeneration() const noexcept;
55
76  std::vector<int32_t> getTransientBulk(const std::vector<std::string>& strings) const;
77  int32_t getOrAddTransient(const std::string&);
78  int32_t getOrAddTransient(const std::string_view);
79  // Not currently used
80  std::vector<int32_t> getOrAddTransientBulk(const std::vector<std::string>& strings);
81  int32_t getIdOfString(const std::string& str) const;
83  const std::string& str) const; // disregard generation, only used by QueryRenderer
84  std::string getString(int32_t string_id) const;
85  std::vector<std::string> getStrings(const std::vector<int32_t>& string_ids) const;
86  std::pair<const char*, size_t> getStringBytes(int32_t string_id) const noexcept;
87
88  template <typename T>
90  size_t const offset_;
91  std::vector<T> vector_map_;
92  int64_t num_untranslated_strings_{-1};
93  T range_start_{0};
94  T range_end_{0};
95
96  public:
97  // +1 is added to skip string_id=-1 reserved for INVALID_STR_ID. id_map[-1]==-1.
98  TranslationMap(uint32_t const tran_size, uint32_t const dict_size)
99  : offset_(tran_size + 1), vector_map_(offset_ + dict_size) {}
100  TranslationMap(uint32_t const tran_size, uint32_t const dict_size, const T& init_val)
101  : offset_(tran_size + 1), vector_map_(offset_ + dict_size, init_val) {}
102  TranslationMap(TranslationMap const&) = delete;
103  TranslationMap(TranslationMap&&) = default;
104  bool empty() const { return vector_map_.size() == 1; }
105  inline size_t getIndex(int32_t const id) const { return offset_ + id; }
106  std::vector<T> const& getVectorMap() const { return vector_map_; }
107  size_t size() const { return vector_map_.size(); }
108  size_t numTransients() const { return offset_ - 1; }
109  size_t numNonTransients() const { return vector_map_.size() - offset_; }
110  T* data() { return vector_map_.data(); }
111  T const* data() const { return vector_map_.data(); }
112  int32_t domainStart() const { return -static_cast<int32_t>(offset_); }
113  int32_t domainEnd() const { return static_cast<int32_t>(numNonTransients()); }
114  void setRangeStart(const int32_t range_start) { range_start_ = range_start; }
115  void setRangeEnd(const int32_t range_end) { range_end_ = range_end; }
116  T rangeStart() const { return range_start_; }
117  T rangeEnd() const { return range_end_; }
118
119  // Next two methods are currently used by buildUnionTranslationMapToOtherProxy to
120  // short circuit iteration over ids after intersection translation if all
121  // ids translated. Currently the private num_untranslated_strings_ is initialized
122  // to a -1 sentinel to signify that the value has not been calculated, which we
123  // CHECK against in the getter numUntranslatedStrings() method
124  // to represent that the num_untranslated_strings_ field has been uninitialized
125  size_t numUntranslatedStrings() const {
126  CHECK_GE(num_untranslated_strings_, 0L);
127  return static_cast<size_t>(num_untranslated_strings_);
128  }
129  void setNumUntranslatedStrings(const size_t num_untranslated_strings) {
130  num_untranslated_strings_ = static_cast<int64_t>(num_untranslated_strings);
131  }
132  T* storageData() { return vector_map_.data() + offset_; }
133  T& operator[](int32_t const id) { return vector_map_[getIndex(id)]; }
134  T operator[](int32_t const id) const { return vector_map_[getIndex(id)]; }
135  friend std::ostream& operator<<(std::ostream& os, TranslationMap<T> const& sdp_map) {
136  return os << "IdMap(offset_(" << sdp_map.offset_ << ") vector_map_"
137  << shared::printContainer(sdp_map.vector_map_) << ')';
138  }
139  };
140
142
143  IdMap initIdMap() const {
144  return IdMap(
146  }
147
168  TranslationMap<Datum> buildNumericTranslationMap(
169  const std::vector<StringOps_Namespace::StringOpInfo>& string_op_infos) const;
170
172  const StringDictionaryProxy* dest_proxy,
173  const std::vector<StringOps_Namespace::StringOpInfo>& string_op_infos) const;
174
176  StringDictionaryProxy* dest_proxy,
177  const std::vector<StringOps_Namespace::StringOpInfo>& string_op_types) const;
178
188  size_t storageEntryCount() const;
189
196  size_t transientEntryCount() const;
197
206  size_t entryCount() const;
207
208  void updateGeneration(const int64_t generation) noexcept;
209
210  std::vector<int32_t> getLike(const std::string& pattern,
211  const bool icase,
212  const bool is_simple,
213  const char escape) const;
214
215  std::vector<int32_t> getCompare(const std::string& pattern,
216  const std::string& comp_operator) const;
217
218  std::vector<int32_t> getRegexpLike(const std::string& pattern, const char escape) const;
219
221  using is_transparent = void; // Used by robin_hood to activate heterogenous hashing
222  // std::string and char const* are implicitly cast to std::string_view.
223  size_t operator()(std::string_view const key) const {
224  return robin_hood::hash_bytes(key.data(), key.size());
225  }
226  };
228  using is_transparent = void; // Used by robin_hood to activate heterogenous equal
229  // std::string and char const* are implicitly cast to std::string_view.
230  bool operator()(std::string_view const lhs, std::string_view const rhs) const {
231  return lhs == rhs;
232  }
233  };
234
235  // The std::string must live in the map, and std::string const* in the vector. As
236  // desirable as it might be to have it the other way, string addresses won't change
237  // in the robin_hood::unordered_node_map when new strings are added, but may change
238  // in a std::vector (and robin_hood::unordered_flat_map).
239  using TransientMap = robin_hood::unordered_node_map<std::string,
240  int32_t,
241  HeterogeneousStringHash,
243
244  const std::vector<std::string const*>& getTransientVector() const {
245  return transient_string_vec_;
246  }
247
248  // INVALID_STR_ID = -1 is reserved for invalid string_ids.
249  // Thus the greatest valid transient string_id is -2.
250  static unsigned transientIdToIndex(int32_t const id) {
251  constexpr int max_transient_string_id = -2;
252  return static_cast<unsigned>(max_transient_string_id - id);
253  }
254
255  static int32_t transientIndexToId(unsigned const index) {
256  constexpr int max_transient_string_id = -2;
257  return static_cast<int32_t>(max_transient_string_id - index);
258  }
259
260  // Iterate over transient strings, then non-transients.
262
263  // Union strings from both StringDictionaryProxies into *this as transients.
264  // Return map of old string_ids to new string_ids.
266
267  private:
268  std::string getStringUnlocked(const int32_t string_id) const;
269  size_t transientEntryCountUnlocked() const;
270  size_t entryCountUnlocked() const;
271  size_t persistedC() const;
272  template <typename String>
273  int32_t getOrAddTransientImpl(String);
274  template <typename String>
275  int32_t lookupTransientStringUnlocked(const String& lookup_string) const;
276  size_t getTransientBulkImpl(const std::vector<std::string>& strings,
277  int32_t* string_ids,
278  const bool take_read_lock) const;
279  template <typename String>
280  size_t transientLookupBulk(const std::vector<String>& lookup_strings,
281  int32_t* string_ids,
282  const bool take_read_lock) const;
283  template <typename String>
284  size_t transientLookupBulkUnlocked(const std::vector<String>& lookup_strings,
285  int32_t* string_ids) const;
286  template <typename String>
287  size_t transientLookupBulkParallelUnlocked(const std::vector<String>& lookup_strings,
288  int32_t* string_ids) const;
289
291  const StringDictionaryProxy* dest_proxy,
292  const std::vector<StringOps_Namespace::StringOpInfo>& string_op_infos) const;
293
294  std::shared_ptr<StringDictionary> string_dict_;
297  // Holds pointers into transient_str_to_int_
298  std::vector<std::string const*> transient_string_vec_;
299  int64_t generation_;
301
302  // Return INVALID_STR_ID if not found on string_dict_. Don't lock or check transients.
303  template <typename String>
304  int32_t getIdOfStringFromClient(String const&) const;
305  template <typename String>
306  int32_t getOrAddTransientUnlocked(String const&);
307
308  friend class StringLocalCallback;
309  friend class StringNetworkCallback;
310 };
311 #endif // STRINGDICTIONARY_STRINGDICTIONARYPROXY_H
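The transient-ID arithmetic in the listing above is compact enough to check by hand: transient string IDs count down from -2 (because -1 is reserved for INVALID_STR_ID), and TranslationMap lays transients and persistent IDs out in one flat vector behind an offset of tran_size + 1. The sketch below ports just that arithmetic to JavaScript for illustration; the helper names mirror the C++ members but are otherwise invented.

```javascript
// Transient IDs: -2 -> index 0, -3 -> index 1, ... (-1 is INVALID_STR_ID).
// Mirrors transientIdToIndex / transientIndexToId from the header above.
var MAX_TRANSIENT_STRING_ID = -2;
function transientIdToIndex(id) { return MAX_TRANSIENT_STRING_ID - id; }
function transientIndexToId(index) { return MAX_TRANSIENT_STRING_ID - index; }

// TranslationMap layout: one vector of (tran_size + 1 + dict_size) entries,
// indexed as offset + id with offset = tran_size + 1 (the +1 skips the
// slot for INVALID_STR_ID = -1). Negative (transient) IDs land at the
// front of the vector; persistent IDs start at `offset`.
function translationMapIndex(tranSize, id) {
  var offset = tranSize + 1;
  return offset + id;
}
```

For example, with tran_size = 3 the valid domain is IDs -4 through dict_size - 1, and the lowest transient ID (-4) maps to vector slot 0, matching domainStart() = -offset_ in the header.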
Project:Tool Access Control/ACNet
From London Hackspace Wiki
Revision as of 01:11, 11 July 2013

== Summary ==

This page aims to scope out the different projects that work together as part of the Access Control Network. The main components are:
* ACNode - the clients that sit on the tool being controlled; they manage physical access by reading the RFID card.
* ACServer - the server stores the authentication information; it pulls membership information from Turing over JSON and stores it in a SQLite DB.
* Membership DB - secured storage of membership data.
== TODO ==
* Add a secondary acnode and test multiple acnode functionality (Sol)
* Figure out a strategy of syncing the membership database to the local access database, as currently manual (mentar)
* Move the acserver to a separate VM running on the server downstairs (tgreer + mentar)
* Code a basic web ui for adding tools/maintainers/nodes (mentar)

== System diagram ==

<graphviz border='frame' format='png' >
digraph rfboard{
  rankdir=TD;
  size="10,5!";
  subgraph cluster_0 {
    node [shape=box,style=filled,color=lightgrey];
    label = "ACServer";
    local_db [label="local db",shape=box];
    httpserver [label="HTTP server",shape=box];
    httpserver -> local_db;
    local_db -> httpserver;
  }
  acnode1 [label="ACNode",shape=box];
  acnode2 [label="ACNode",shape=box];
  acnode3 [label="ACNode",shape=box];
  membershipdb [label="Membership DB",shape=box];
  acnode1 -> httpserver;
  acnode2 -> httpserver;
  acnode3 -> httpserver;
  httpserver -> acnode1;
  httpserver -> acnode2;
  httpserver -> acnode3;
  membershipdb -> httpserver;
}
</graphviz>

== AC Node ==
Currently proposed (and built) by Solexious. Link

== AC Server ==
Two versions:
* A Python Flask implementation started by ms7821 can be located here, further improved by asoko.
* A PHP CodeIgniter implementation developed by mentar and Oskar, located here.

Usage:
curl http://[server]:[port]/[node_id]/card/[card_id]
For testing it's installed on babbage port 1234.

== Membership DB ==
Running on the Turing VM slice (hosted outside the space, as it holds personal data). Accessed in JSON format.
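The node-to-server exchange above (GET http://[server]:[port]/[node_id]/card/[card_id]) ultimately reduces to looking a card up in a per-tool permission table on the acserver. The sketch below shows that check in JavaScript; the table contents, card IDs, and function name are all invented for illustration, and the real acserver keeps this data in its SQLite DB.

```javascript
// Hypothetical sketch of the check behind GET /[node_id]/card/[card_id].
// The real acserver stores permissions in SQLite, synced (manually, per
// the TODO) from the membership DB; this in-memory table is invented.
var permissions = {
  1: ['04A2B3C4', '04DEADBF'], // node_id -> card IDs allowed on that tool
  2: ['04A2B3C4']
};

// Returns '1' (grant) or '0' (deny): the kind of tiny plain-text reply
// an ACNode could parse over HTTP.
function checkCard(nodeId, cardId) {
  var allowed = permissions[nodeId] || [];
  return allowed.indexOf(cardId) !== -1 ? '1' : '0';
}
```

An ACNode would then only need to issue the GET and switch the tool on or off based on the one-character reply.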
Please help me to find the shortest distance between the given lines (Grade 12)

Guest: Please help me to find the shortest distance between the given lines. (The question itself was posted as an image, not reproduced here.)

Piyush Kumar Behera (answer): @Kis Hor, I have provided the solution in an image below; please refer to the image for the solution. Please approve the solution.
Domain Name System (DNS) resolution service

* What are the pros and cons of using an alternative DNS instead of the ISP DNS server? - I searched the web for answers, but I haven't found anything conclusive. What are the advantages/disadvantages of using an alternative DNS (for example, OpenDNS or Google DNS) as opposed to the ...
* OpenDNS on my router giving problems - Recently I switched to OpenDNS with FamilyShield. I have applied the DNS setting on my Belkin wireless router. If I don't use the internet for 3-4 hours, my ISP asks me to re-login; then I can't access ...
* Prevent users from changing LAN settings - I'd like to stop my kids from changing either the LAN or WAN settings on PC or laptop. I'm using OpenDNS and have it set to use their DNS servers, but my kids keep removing the DNS settings to access ...
* How can I store DNS cache in case the DNS server goes down? - I'm using OpenDNS's DNS servers. Today, it went down for half an hour. I'm wondering if I can store a cached copy of the DNS records of the websites that I visit often. Google Chrome does this ...
* How do I set DNS servers on Raspberry Pi? - I want my Raspberry Pi to use OpenDNS to resolve domain names. How can I modify this setting?
* How exactly does DNS work? - I'm learning pentesting from books. So far I thought I knew about DNS, but now I'm completely lost and confused. Well, I know what happens when you enter a domain name in your browser: Say, I've ...
* Bypassing router's DNS settings - Is there a way to bypass my ISP-provided CPE/router's DNS settings? I'd like to use OpenDNS but I am unable to access the administrator account of the CPE. I tried logging in using the default ...
* Filtering adult content - I'm using OpenDNS now for over a month. But it doesn't filter adult content although I specified High Restriction in the settings. It automatically sends the IP to its server (?) but it doesn't ...
* How can I tell if my ISP is redirecting my DNS queries? - I've attempted to use some DNS services like OpenDNS, and no matter what I do the DNS queries don't return the expected results. Watching the packet traffic on my firewall, I can see the queries go ...
* OpenDNS not working from one Ubuntu machine - I have OpenDNS set up on my router; however, I'm getting a strange situation. The OpenDNS welcome page confirms that OpenDNS is working on my Windows XP laptop and on an Ubuntu netbook, but not on my ...
Two copy / pastes
Discussion in 'Computer Software and Operating Systems' started by slayerspud, Apr 4, 2008.

slayerspud: First, I don't know if this is the correct section, so feel free to move it. Basically, I want to press one keyboard button for one piece of text, and another button for a different piece of text. Obviously I can already use CTRL + V for one piece of text, but I need it for two different pieces of text. I am using Word 2003 or thereabouts. Anyone know how to do this? Thanks.

Bob Evil: Why not just do them one at a time? You can have multiple documents open in Word, so one at a time is easy enough. Or, if both pieces are from the same source, simply highlight the entire source, from the start of piece one to the end of piece two, paste it across, then highlight and delete all the bits you don't need.

Jiggah: Uh... use Clipboard?

Bob Evil: Or that... the options are many.

slayerspud: I know about Clipboard, but do you know how to bind a certain item to a certain key?

noisound: If you just need to copy two different parts of your document (pieces of text) in Word, you hold Ctrl and highlight the text you want. Try playing around with Ctrl highlighting to get the feel of it. You can also use Shift, double-click for one word, and undo the highlighting (selection) by rehighlighting the text.

suppachipmunk: Are you looking to copy one paragraph of text in a document, skip one paragraph, then copy the following paragraph? You can start highlighting one piece of text, then hold CTRL, then highlight the next piece that you want. You can do this as many times as needed while skimming through the text. Then, when you are done highlighting, right-click and copy, then paste into your Word document. I think this is what you want to do? EDIT: noisound beat me to it... WAIT, are you referring to "macroing" your keys? Like how you have a certain button to bring up the start menu, and a certain button to bring up your DVD-playing software? Something like that? You would have to have that option on your laptop; if you have a desktop, you should be able to use the software that came with your keyboard.

noisound: @suppachipmunk better luck next time. I know you probably spent a long time typing that post, having opened the thread earlier but only just now getting to it. I did have to sit for a while to think about slayerspud's basic description. Nice job going the extra mile on the macro suggestion!
Network + Card Set Information Author: MartyrX ID: 136734 Filename: Network + Updated: 2012-02-21 20:34:25 Tags: Network Folders: Description: Network + Exam Show Answers: Home > Flashcards > Print Preview The flashcards below were created by user MartyrX on FreezingBlue Flashcards. What would you like to do? 1. A type of cable containing twisted-wire pairs that are not only individually insulated, but also surrounded by a shielding made of a metallic substance such as foil. STP (shielded twisted pair) 2. A form of cable that contains one or several glass or plastic fibers in its core. Data is transmitted via pulsing light sent from a laser or light-emitting diode (LED) through the central fiber (or fibers). Fiber-optic cables offer significantly higher throughput than copper-based cables. They may be single-mode or multimode and typically use wave division multiplexing to carry multiple signals. fiber-optic cable 3. A type of interference that may be caused by motors, power lines, televisions, copiers, fluorescent lights, or other sources of electrical activity. EMI (electromagnetic interference) 4. A type of transmission in which signals may travel in both directions over a medium simultaneously. May also be called, simply, "duplex." full-duplex 5. An IEEE Physical layer standard for achieving 10-Mbps throughput over coaxial copper cable. Thinnet is also known as 10Base-2. Its maximum segment length is 185 meters, and it relies on a bus topology. Thinnet 6. An IEEE Physical layer standard for achieving a maximum of 10-Mbps throughput over coaxial copper cable. Thicknet is also known as 10Base-5. Its maximum segment length is 500 meters, and it relies on a bus topology. Thicknet 7. The delay between the transmission of a signal and its receipt. Latency 8. The unwanted signals, or interference, from sources near network cabling, such as electrical motors, power lines, and radar. Noise 9. 
A type of cable that consists of a central metal conducting core, which might be solid or stranded and is often made of copper, surrounded by an insulator, a braided metal shielding, called braiding, and an outer cover, called the sheath or jacket. Coaxial cable, called "coax" for short, was the foundation for Ethernet networks in the 1980s. Today it's used to connect cable Internet and cable TV systems. coaxial cable 10. The amount of data that a medium can transmit during a given period of time. Throughput is usually measured in megabits (1,000,000 bits) per second, or Mbps. The physical nature of every transmission media determines its potential throughput. Throughput 11. A relatively short section (usually between 3 and 25 feet) of cabling with connectors on both ends. patch cable 12. A twisted pair patch cable in which the termination locations of the transmit and receive wires on one end of the cable are reversed. crossover cable 13. A type of cable similar to telephone wiring that consists of color-coded pairs of insulated copper wires, each with a diameter of 0.4 to 0.8 mm, twisted around each other and encased in plastic coating. twisted pair 14. As opposed to analog signals, digital signals are composed of pulses that can have a value of only 1 or 0. Digital 15. A measure of the difference between the highest and lowest frequencies that a medium can transmit. bandwidth 16. A type of cable in which the terminations on one end are exactly the reverse of the terminations on the other end. It is used for serial connections between routers and consoles or other interfaces. rollover cable 17. The extent to which a signal has weakened after traveling a given distance. attenuation 18. A type of transmission in which signals may travel in both directions over a medium, but in only one direction at a time. half-duplex 19. A twisted pair patch cable in which the wire terminations in both connectors follow the same scheme. straight-through cable 20. 
A form of transmission in which signals are modulated as radiofrequency analog pulses with different frequency ranges. Unlike baseband, broadband technology does not involve binary encoding. The use of multiple frequencies enables a broadband system to operate over several channels and, therefore, carry much more data than a baseband system. broadband 21. A form of transmission in which digital signals are sent through direct current pulses applied to a wire. This direct current requires exclusive use of the wire's capacity, so baseband systems can transmit only one signal, or one channel, at a time. Every device on a baseband system shares a single channel. Baseband 22. A type of cabling that consists of one or more insulated wire pairs encased in a plastic sheath. - As its name implies, UTP does not contain additional shielding for the twisted pairs. As a result, UTP is both less expensive and less resistant to noise than STP. UTP (unshielded twisted pair) 23. A computer that runs a desktop operating system and connects to a network. Workstation 24. The devices, data, and data storage space provided by a computer, whether stand-alone or shared. Resources 25. A computer that manages Web site services, such as supplying a Web page to multiple users on demand. Web server 26. A computer on the network that requests resources or services from another computer on a network. In some cases, a client could also act as a server. The term client may also refer to the user of a client workstation or a client software application installed on the workstation. Client 27. A network of computers and other devices that is confined to a relatively small space, such as one building or even one office. LAN (local area network) 28. The software that runs on a server and enables the server to manage data, users, groups, security, applications, and other networking functions. 
The most popular network operating systems are Microsoft Windows NT, Windows 2000 Server, and Windows Server 2003, UNIX, Linux, and Novell NetWare. NOS (network operating system) 29. A computer that enables resource sharing by other computers on the same network. Host 30. A network that spans a long distance and connects two or more LANs. WAN (wide area network) 31. A person working on a computer on a different network or in a different geographical location from the LAN's server. remote user 32. The means through which data are transmitted and received. Transmission media may be physical, such as wire or cable, or atmospheric (wireless), such as radio waves. transmission media 33. A computer on the network that manages shared resources. Servers usually have more processing power, memory, and hard disk space than clients. They run network operating software that can manage not only data, but also users, groups, security, and applications on the network. Server 34. The skills such as customer relations, leadership ability, and dependability, which are not easily measured, but are nevertheless important in a networking career. soft skills 35. A group of computers and other devices (such as printers) that are connected by and can exchange data via some type of transmission media, such as a cable, a wire, or the atmosphere. Network 36. A standard method or format for communication between network devices. Protocols ensure that data are transferred whole, in sequence, and without error from one node on the network to another. Protocol 37. A network in which every computer can communicate directly with every other computer. By default, no computer on a peer-to-peer network has more authority than another. However, each computer can be configured to share only some of its resources and keep other resources inaccessible to other nodes on the network. peer-to-peer network 38. 
The device that enables a workstation to connect to the network and communicate with other computers. NICs are manufactured by several different companies and come with a variety of specifications that are tailored to the workstation's and the network's requirements. NICs are also called network adapters. NIC (network interface card) 39. A network that uses centrally administered computers, known as servers, to enable resource sharing for and to facilitate communication between the other computers on the network. client/server network 40. The seventh layer of the OSI model. Application layer protocols enable software programs to negotiate formatting, procedural, security, synchronization, and other requirements with the network. Application layer 41. The lower sublayer of the Data Link layer. The MAC appends the physical address of the destination computer onto the frame. MAC (Media Access Control) sublayerMAC (Media Access Control) sublayer 42. The fourth layer of the OSI model. In the Transport layer protocols ensure that data are transferred from point A to point B reliably and without errors. Transport layer services include flow control, acknowledgment, error correction, segmentation, reassembly, and sequencing. Transport layer 43. A core protocol in the TCP/IP suite that operates in the Network layer of the OSI model and provides information about how and where data should be delivered. IP is the subprotocol that enables TCP/IP to internetwork. IP (Internet Protocol) 44. The upper sublayer in the Data Link layer. The LLC provides a common interface and supplies reliability and flow control services. LLC (Logical Link Control) sublayer 45. A core protocol in the TCP/IP suite that operates in the Network layer of the OSI model and provides information about how and where data should be delivered. IP is the subprotocol that enables TCP/IP to internetwork. IP (Internet Protocol) 46. The third layer in the OSI model. 
Protocols in the Network layer translate network addresses into their physical counter Internet Protocol See IP. Network layer 47. The lowest, or first, layer of the OSI model. Protocols in the Physical layer generate and detect signals so as to transmit and receive data over a network medium. These protocols also set the data transmission rate and monitor data error rates, but do not provide error correction. Physical layer 48. The process of wrapping one layer's PDU with protocol information so that it can be interpreted by a lower layer. For example, Data Link layer protocols encapsulate Network layer packets in frames. Encapsulate 49. A type of Transport layer protocol that requires the establishment of a connection between communicating nodes before it will transmit data. connection oriented 50. A response generated at the Transport layer of the OSI model that confirms to a sender that its frame was received. The ACK packet is the third of three in the three-step process of establishing a connection. ACK (acknowledgment) 51. A networking technology developed by IBM in the 1980s. It relies upon direct links between nodes and a ring topology, using tokens to allow nodes to transmit data. token ring 52. A unique identifying number for a network node that follows a hierarchical addressing scheme and can be assigned through operating system software. Network addresses are added to data packets and interpreted by protocols at the Network layer of the OSI model. network address 53. The fifth layer in the OSI model. The Session layer establishes and maintains communication between two nodes on the network. It can be considered the "traffic cop" for network communications. Session layer 54. A 12-character string that uniquely identifies a network node. The manufacturer hard codes the MAC address into the NIC. This address is composed of the block ID and device ID. MAC address 55. 
A method of gauging the appropriate rate of data transmission based on how fast the recipient can accept data. flow control 56. The second layer in the OSI model. The Data Link layer bridges the networking media with the Network layer. Its primary function is to divide the data it receives from the Network layer into frames that can then be transmitted by the Physical layer. Data Link layer 57. The sixth layer of the OSI model. Protocols in the Presentation layer translate between the application and the network. Here, data are formatted in a schema that the network can understand, with the format varying according to the type of network used. The Presentation layer also manages data encryption and decryption, such as the scrambling of system passwords. Presentation layer 58. A networking technology originally developed at Xerox in the 1970s and improved by Digital Equipment Corporation, Intel, and Xerox. Ethernet, which is the most common form of network transmission technology, follows the IEEE 802.3 standard. Ethernet 59. A type of Transport layer protocol that services a request without requiring a verified session and without guaranteeing delivery of data. Connectionless 60. The upper sublayer in the Data Link layer. The LLC provides a common interface and supplies reliability and flow control services. LLC (Logical Link Control) sublayer 61. A package for data that includes not only the raw data, or "payload," but also the sender's and recipient's addressing and control information. Frames are generated at the Data Link layer of the OSI model and are issued to the network at the Physical layer. Frame 62. The Network layer address assigned to nodes to uniquely identify them on a TCP/IP network. IP addresses consist of 32 bits divided into four octets, or bytes. IP address (Internet Protocol address)
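The MAC address and IPv4 address cards above lend themselves to a quick mechanical check. The Python sketch below is illustrative only — it is not part of the flashcard deck, and the function names are our own. It checks the two formats the cards describe: a MAC address as 12 hexadecimal characters (written here in the conventional six colon-separated octets), and an IPv4 address as four decimal octets in the 0–255 range.

```python
# Illustrative sketch (not from the deck): format checks for the address
# types defined in the MAC address and IP address cards above.

def is_mac(addr: str) -> bool:
    # 12 hex characters, written as 6 colon-separated two-character octets.
    parts = addr.split(":")
    return len(parts) == 6 and all(
        len(p) == 2 and all(c in "0123456789abcdefABCDEF" for c in p)
        for p in parts
    )

def is_ipv4(addr: str) -> bool:
    # 32 bits written as four decimal octets, each 0-255.
    parts = addr.split(".")
    return len(parts) == 4 and all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

print(is_mac("00:1A:2B:3C:4D:5E"))  # True
print(is_ipv4("192.168.0.1"))       # True
print(is_ipv4("192.168.0.256"))     # False: 256 does not fit in one octet
```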
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free. I need to serialize all inputs from a form into a JSON string. With the help of this post, I can successfully create a valid string as below: {"input01":"value01","input02":"value02","input03":"value03"} However, when I try to use the string to POST data using jQuery's Ajax function, it seems to add backslashes to the string, resulting in the JSON string being sent using GET rather than POST. The loaded PHP page returns a $_GET array of: [{\"input01\":\"value01\",\"input02\":\"value02\",\"input03\":\"value03\"}] => I have tested the JSON string using alert() to confirm the structure is correct before being used in the AJAX function. Additionally, if I just manually type in the valid JSON string, the AJAX posts the data correctly. My code is as follows: var dataJSON = $.toJSON($('#form').serializeObject()); alert(dataJSON); $.ajax({ type: "POST", url: "ajax.php", data: 'Query01=01&Query02=02', dataType: 'json', success: function(data){ if (data==1){ $('#wrap').load('ajax.php',dataJSON); } } }); share|improve this question      You're calling .ajax(), then making another ajax request with .load(). Is that correct? –  Roatin Marth Oct 14 '09 at 12:49      Roatin, that is correct. The example above is simplified from my actual script. The actual script posts data using $.ajax that needs to be validated, if successfully validated, the $.load function loads HTML generated from data posted by the JSON string. –  ticallian Oct 14 '09 at 12:58 4 Answers 4 This is the default behaviour of $.ajax(). You can change it by setting the processData option to false. See $.ajax() options. data Object, String Data to be sent to the server. It is converted to a query string, if not already a string. It's appended to the url for GET-requests. See processData option to prevent this automatic processing. Object must be Key/Value pairs. 
If value is an Array, jQuery serializes multiple values with same key i.e. {foo:["bar1", "bar2"]} becomes '&foo=bar1&foo=bar2'. and processData Boolean Default: true By default, data passed in to the data option as an object (technically, anything other than a string) will be processed and transformed into a query string, fitting to the default content-type "application/x-www-form-urlencoded". If you want to send DOMDocuments, or other non-processed data, set this option to false. share|improve this answer      I've tested the info you've listed above without any luck. From what i can see the $.ajax options do not effect the nested $.load function. Any ideas on how i could change the same options for $.load? –  ticallian Oct 14 '09 at 12:39      @ticallian I am not sure why you need the load function inside the success function anyway. Can you not just get everything you need in the $.ajax request using the json value? –  Metropolis Oct 7 '10 at 17:47 Be sure that you echo $_GET['varwithurl'] not echo json_encode($_GET['varwithurl']) as many php web examples do. I send data with url with $.ajax() and don't see unwanted backslashes in php script. share|improve this answer up vote 0 down vote accepted After scouring Google and the jQuery site, i've come to the personal conclusion that the $.load function will convert any variable passed to it as a querystring (As my original problem above outlined). If you wish to pass a JSON string through it, it has to be manually typed. To get around this, I used the low level $.ajax function instead. An advantage of using this method meant I could also send POST data using the standard .serialize() function rather than having to convert my form data into JSON. 
My final code: var formData = $('#form').serialize(); $.ajax({ type: "POST", url: "ajax.php", data: 'Query01=01&Query02=02', dataType: 'json', success: function(data){ if (data==1){ $.ajax({ type: "POST", url: "ajax.php", data: formData, success: function(html){ $("#wrap").replaceWith(html); } }); } } }); If anyone else has a solution, please comment. share|improve this answer      Since you're using php as server-side. I guess, you could have used the php function stripslashes() –  Manish Shrestha 14 hours ago <html> <head> <script src="resources/jquery-2.1.0.js"></script> <script src="resources/jquery.serializejson.min.js"></script> </head> <body> <script> $(document).ready(function(){ $("#simplepost").click(function(e){ var MyForm = $("#appForm").serializeJSON(); console.log(MyForm); $.ajax( { url: "rest/emp/create", type: "POST", data: JSON.stringify(MyForm), contentType: "application/json; charset=utf-8", dataType: "json", success:function(maindta){ alert(maindta); }, error: function(jqXHR, testStatus, errorThrown){ alert(errorThrown); } }); e.preventDefault(); //STOP default action }); }); </script> <h2>Hello World!</h2> <form id="appForm" method="POST"> EmployeeID:<input type="text" name='id' value="" /> Employee Name:<input type="text" name="name" value=""/> <br> <input type="button" value="Submit" id="simplepost" /> </form> </body> </html> share|improve this answer      Code dumps without explanation are rarely helpful. Please consider adding some context to your answer. –  Chris Oct 10 '14 at 23:45 Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
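A plain-JavaScript sketch of the behaviour the `processData` answer describes: by default, jQuery form-encodes an object passed as `data` into a query string, whereas sending raw JSON requires a pre-stringified body. This snippet is illustrative and runs without jQuery — `URLSearchParams` stands in for jQuery's internal serialization:

```javascript
// Contrast the two encodings of the same form object.
const form = { input01: "value01", input02: "value02", input03: "value03" };

// What $.ajax does to an object `data` when processData is true (the default):
const queryString = new URLSearchParams(form).toString();
console.log(queryString); // input01=value01&input02=value02&input03=value03

// What needs to reach the server if the PHP side expects raw JSON instead:
const jsonBody = JSON.stringify(form);
console.log(jsonBody); // {"input01":"value01","input02":"value02","input03":"value03"}
```

In jQuery terms, sending the JSON text means passing `data: jsonBody, processData: false, contentType: "application/json"`. The backslashes the poster saw most likely come from the server side escaping quotes on arrival (e.g. PHP's old magic quotes behaviour), which the `stripslashes()` suggestion in the comments would undo.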
Question: What Kind Of Math Is Used In Computer Science? What is the hardest computer science course? Hardest Computer Science ClassesData Structures and Algorithms.Discrete Mathematics.Operating Systems.Automata Theory.Calculus. These are the 5 hardest computer science classes that you’ll take during your undergraduate (in no particular order). Technically, Calculus isn’t a Comp Sci class. However, it is required for most C.S. programs.. Is computer science just coding? Computer science is about solving problems using computers and coding (or programming) is about implementing these solutions. Computer scientists can be like architects who design the house—but do not have to build it. … Coding (Programming) is just one of these seven areas. What math do you need for computer science? At UMCP, it’s two semesters of calculus, one semester of linear algebra, and one semester of statisics. Some of the upper level computer science courses, such as number theory and discrete mathematics, are also listed as mathematics courses. Is computer science a lot of math? Also, studying Computer Science or Computer Engineering involves a lot of math, but this is not really necessary on the field. … Math is also necessary to understand algorithms complexity, but you are not going to invent new algorithms, at least in the first few years of programming. How difficult is computer science? Computer Science is a hard discipline to learn. But, if you are motivated and devote sufficient time to studying the discipline, then it is possible to learn Computer Science. Initially Computer Science seems hard because learning to program is challenging. … However, most of people learn skills step-by-step over time. Which is the toughest course in the world? 
Here is the list of Toughest Exam in the World:Master Sommelier Diploma.Union Public Service Commission(UPSC)Indian Engineering Services(IES)Graduate Record Exams(GRE)The United States Medical Licensing Examination(USMLE)National Admissions Test for Law(LNAT)Cisco Certified Inter-networking Expert(CCIE)More items…• What type of math is used in coding? These include college algebra, statistics, calculus I and calculus II. These classes are applied in two different ways for computer programming. The most obvious is using the math taught to solve complex equations. Is calculus used in computer science? In Computer Science, Calculus is used for machine learning, data mining, scientific computing, image processing, and creating the graphics and physics engines for video games, including the 3D visuals for simulations. Calculus is also used in a wide array software programs that require it. How is math used in computer science? Algebra is used in computer science in the development of algorithms and software for working with mathematical objects. It is also used to design formulas that are used in numerical programs and for complete scientific computations. Can I be a software engineer if I’m bad at math? Then you will likely not be a good software engineer. Most programmers don’t need any mathematics beyond basic arithmetic, but they do need the underlying skills of being able to think logically and to formulate problems in a way that they can be formally reasoned about. Sure, but you’ll need to learn some math first. What subjects are required for computer science? Some core computer science courses you may cover include theory of computation, fundamentals of computer science, compliers and operating systems, information theory, basic programming, systems and architecture, software development and testing, web applications and databases, algorithms and data structures, and … Are hackers good at math? If you want to be able to hack. 
You need rudimentary algebra at most but if you want to break security and understand security then you would do well to verse yourself in Cryptography which has math in abstract algebra and more. … You don’t need any math to be a hacker. Is a level maths hard? So yes, in essence, A-Level Maths is more difficult than GCSE Maths. It’s a step up in independency as you’re expected to learn content on your own. It’s also just generally harder content! The syllabus builds on and challenges GCSE Maths, and so you’ll find that it ramps up in difficulty as you progress. What is the hardest subject? Top Ten Hardest School Subjects Physics. Physics by far is the HARDEST SUBJECT IN HIGH SCHOOL. … Foreign Language. I was in fourth grade when I learned Japanese, it was my favourite subject and I loved learning a new language. … Chemistry. Good Lord, I hate Chemistry. … Math. … Calculus. … English. … Biology. … Trigonometry.More items… What should I study if I like math? Physics and Engineering is all a bunch of calculus and differential equations. Programming and Computer Science is a bunch of linear algebra and numerical systems. Almost any field in science will rely heavily on mathematics. Think more about what you want to do with your degree than the degree itself. Can you do computer science without maths? Math is compulsory subject for every candidate in begining of there course if u dont want maths in your subjects then you will do your graduation in arts with computer science. … Brother In the computer science without mathematics you can’t do anything. What can I study without maths? These are the courses you can study without credit in MathematicsAdult Education.Civil Law.Conflict and Peace Resolution.Counsellor Education.Drama / Dramatic / Performing Art.Education and Biology.Education and Efik/Ibibio.Education and Geography.More items…• What is the hardest college major? The HardestBiology. 
A biology major can prepare students for careers in the medical and science fields. … Computer Science. While computer science is one of the hardest college majors, graduates often secure lucrative careers. … Civil Engineering. … Mechanical Engineering. … Social Science.
Question 8 - Arithmetic Reasoning Practice Test for the ASVAB Four out of \(28\) students had to go to summer school. What is the ratio of students who did not have to go to summer school to the total number of students in this group, expressed in its lowest terms?
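As a quick check of the arithmetic (this snippet is ours, not part of the practice test): 24 of the 28 students did not attend summer school, and 24/28 reduces to 6/7.

```python
# Worked check of the question above.
from fractions import Fraction

total = 28
summer_school = 4
ratio = Fraction(total - summer_school, total)  # Fraction auto-reduces
print(ratio)  # 6/7
```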
Home System Programming Learn to Write C++ Programs on Linux Learn to Write C++ Programs on Linux 0 6176 C++-Writing-Programs-on-Linux So far, we have been writing and executing C++ programs using Turbo C++ and Dev-C++. Both IDEs run on Microsoft Windows machines. Now, it is time to switch a little to another operating system, and learn how to develop using C++ on Linux. Let’s get started. Installing C++ Compiler on Linux Box In your Linux (Red Hat/CentOS/Fedora) machine, type the following command as root to install the C++ compiler: To verify if the GCC compiler has been installed successfully • Use the rpm –qa command: 1 • Use the which command: 2 Writing your First C++ Program on Linux 1. From your terminal, open a new file for editing using the vim command: 2. In the vim editor, type the following code: 3. Save and exit the file 4. To compile your new C++ program, type the following command from your terminal: If the compilation goes without errors, no output will be displayed. 3 5. An executable file is created in the current directory, with default name a.out. 4 6. To run the program, execute the generated executable the same way you execute any Linux executable. 5 Congratulations! Specifying Name for the Resulting Executable As I told you, compiling C++ programs without specifying options (as we did above) produces an executable file named a.out. If you want to specify a name of your choice for the resulting executable, you have two options: either to rename the default a.out after it is created, or to specify the executable filename on compilation using the –o option. Executing the named executable should produce the same output: 6 Executing System Commands from C++ Programs It is necessary to be able to communicate with the system by executing operating system commands on need. The system() function allows you to run system commands from C++ code. 
For the compiler to recognize this function, and to compile successfully, the stdlib.h library file should be invoked. Example Write a program that displays the following system info: • Hostname. • IP address. • Date and time. • File systems utilization. Type the following code in a new file (say system_commands.cc) Compile the source file: 7 Now, execute it: 8 The getpid() and getppid() Functions To get the Process ID (PID) of the current process, the getpid() is used. Example The following program prints the process ID of the program: Compile and execute the program: 9 Notice the pid_t data type. It is a numeric data type that is used for process IDs. The getppid() function returns the process ID of the parent process. Example Modify the previous program to print the PPID also. When compiled and executed, this program should print both the process ID of the program, and its parent process ID. 10 The fork() Function The fork() function is used to create a new (child) process by duplicating the calling (parent) process. The function returns the PID of the new child process (on success). On failure, -1 is returned instead. In the child process, 0 is returned on success. Example The following program starts a child process. If this operation is successful, the PIDs of both the current process and the child process are printed. If not, the program prints a message that it failed to start the child process. Let’s see what we get when this program is run: 11 Notice that the child process is copy of the parent. That is why four lines got printed, instead of two. Notice also the difference between values of the child process ID: inside the parent, the PID of the child is returned by the fork() function and printed. On the other hand, the fork() function returns 0 to the child process. Summary • To write C/C++ programs on UNIX/Linux machines, the GCC compiler is needed. • C++ programs are written and saved as .cc files. 
• The c++ and g++ commands compile and link C++ source files. If the compilation goes without errors, an executable file is created, with default name a.out. • The resulting executable can be executed the same way UNIX/Linux executables are executed. • The system() function is used to run system commands from C++ code. • The getpid() and getppid() functions return the process ID and the parent process ID of the program. • The fork() function provides a way for a process to run another program (process). In the next article, we are going to talk about Input/Output and File Handling. See you. NO COMMENTS LEAVE A REPLY Please enter your comment! Please enter your name here close-link Shares Share This
__label__pos
0.924055
   Deploying a form  Author Message  Deploying a form I have put a form together that in itself contains all of the functionality needed for it to work. It uses two free tables also. I can not seem to get this program to compile and run independant properly. When I do, I get an empty parent window with warning that the tables are read only. To access the form I have to go to new from the file menu that is there by default. I get errors on the form that the controlsource for the textboxes is missing (they seem fine when I check). I would like to see the form run by itself without a parent frame and the tables read/write, just like when the form is ran in the environment. Please Help ASAP. Thank you. Jacob Wed, 09 Feb 2005 00:19:58 GMT      [ 1 post ]   Relevant Pages  1. Deploying a Form 2. Deploying VFP application as MSI for dotnet 3. VFP8 application deployed with SetupFactory 4. Security with deploying app 5. Dynamically Deploying Packages 6. Updating deployed database 7. Updating a deployed table 8. How to deploy a VFP7 application 9. Error deploying a VFP 7.0 program. 10. Deploying VFP COM - Foxisapi 11. How to deploy .OCX ? 12. How to distribute/deploy VFP 7 application ?     Powered by phpBB® Forum Software
__label__pos
0.985786
Aptitude - Problems On Trains Test

Test Instructions:
1. The test is 1 hr in duration.
2. The test paper consists of 30 questions. The maximum marks are 30.
3. All the questions are multiple choice questions with three options for each question.
4. Out of the three options given for each question, only one option is the correct answer.
5. Each question is allotted 1 mark for each correct response.
6. 0.25 marks will be deducted for each incorrect response.

1. A train travelling at 48 kmph completely crosses another train having half its length and travelling in the opposite direction at 42 kmph, in 12 seconds. It also passes a railway platform in 45 seconds. What is the length of the platform?
2. A 270 metres long train running at the speed of 120 kmph crosses another train running in the opposite direction at the speed of 80 kmph in 9 seconds. What is the length of the other train?
3. A train 108 m long moving at a speed of 50 km/hr crosses a train 112 m long coming from the opposite direction in 6 seconds. What is the speed of the second train?
4. Two trains are running at 40 km/hr and 20 km/hr respectively in the same direction. The fast train completely passes a man sitting in the slower train in 5 seconds. What is the length of the fast train?
5. A train 240 m long passes a pole in 24 seconds. How long will it take to pass a platform 650 m long?
6. A train speeds past a pole in 15 seconds and a platform 100 m long in 25 seconds. What is its length?
7. Two trains of equal lengths take 10 seconds and 15 seconds respectively to cross a telegraph post. If the length of each train is 120 metres, in what time (in seconds) will they cross each other travelling in opposite directions?
8. Two trains are running in opposite directions with the same speed. If the length of each train is 120 metres and they cross each other in 12 seconds, what is the speed of each train (in km/hr)?
9. A train overtakes two persons who are walking in the same direction in which the train is going, at the rate of 2 kmph and 4 kmph, and passes them completely in 9 and 10 seconds respectively. What is the length of the train?
10. A train 360 m long is running at a speed of 45 km/hr. In what time will it pass a bridge 140 m long?
11. Two trains, each 100 m long, moving in opposite directions, cross each other in 8 seconds. If one is moving twice as fast as the other, what is the speed of the faster train?
12. Two trains of equal length are running on parallel lines in the same direction at 46 km/hr and 36 km/hr. The faster train passes the slower train in 36 seconds. What is the length of each train?
13. A train travelling at a speed of 75 mph enters a tunnel 3 1/2 miles long. The train is 1/4 mile long. How long does it take for the train to pass through the tunnel, from the moment the front enters to the moment the rear emerges?
14. A train 125 m long passes a man, running at 5 km/hr in the same direction in which the train is going, in 10 seconds. What is the speed of the train?
15. A goods train runs at the speed of 72 kmph and crosses a 250 m long platform in 26 seconds. What is the length of the goods train?
16. A train moves past a telegraph post and a bridge 264 m long in 8 seconds and 20 seconds respectively. What is the speed of the train?
17. A jogger running at 9 kmph alongside a railway track is 240 metres ahead of the engine of a 120 metres long train running at 45 kmph in the same direction. In how much time will the train pass the jogger?
18. Two trains 140 m and 160 m long run at the speeds of 60 km/hr and 40 km/hr respectively in opposite directions on parallel tracks. What time (in seconds) do they take to cross each other?
19. Two trains are moving in opposite directions at 60 km/hr and 90 km/hr. Their lengths are 1.10 km and 0.9 km respectively. What is the time taken (in seconds) by the slower train to cross the faster train?
20. A train overtakes two persons walking along a railway track. The first one walks at 4.5 km/hr; the other one walks at 5.4 km/hr. The train needs 8.4 and 8.5 seconds respectively to overtake them. What is the speed of the train if both persons are walking in the same direction as the train?
21. How many seconds will a 500 metre long train take to cross a man walking at a speed of 3 km/hr in the direction of the moving train, if the speed of the train is 63 km/hr?
22. Two trains, one from Howrah to Patna and the other from Patna to Howrah, start simultaneously. After they meet, the trains reach their destinations after 9 hours and 16 hours respectively. What is the ratio of their speeds?
23. Two goods trains, each 500 m long, are running in opposite directions on parallel tracks. Their speeds are 45 km/hr and 30 km/hr respectively. Find the time taken by the slower train to pass the driver of the faster one.
24. A train 110 metres long is running at a speed of 60 kmph. In what time will it pass a man who is running at 6 kmph in the direction opposite to that in which the train is going?
25. A train 800 metres long is running at a speed of 78 km/hr. If it crosses a tunnel in 1 minute, what is the length of the tunnel (in metres)?
26. What is the length of the bridge which a train 130 metres long, travelling at 45 km/hr, can cross in 30 seconds?
27. A train running at the speed of 60 km/hr crosses a pole in 9 seconds. What is the length of the train?
28. Two trains running in opposite directions cross a man standing on the platform in 27 seconds and 17 seconds respectively, and they cross each other in 23 seconds. What is the ratio of their speeds?
29. A 300 metre long train crosses a platform in 39 seconds, while it crosses a signal pole in 18 seconds. What is the length of the platform?
30. Two stations A and B are 110 km apart on a straight line. One train starts from A at 7 a.m. and travels towards B at 20 kmph. Another train starts from B at 8 a.m. and travels towards A at a speed of 25 kmph. At what time will they meet?
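Most of these problems reduce to the same relative-speed identity: convert km/hr to m/s by multiplying by 5/18, then divide the total distance covered (train length plus platform, bridge, or other train) by the relative speed. As a sketch, here are two of the questions above worked in Python with exact fractions — the pole-crossing question and the opposite-direction crossing:

```python
from fractions import Fraction

def kmph_to_mps(v):
    """Convert km/hr to m/s exactly (1000 m / 3600 s = 5/18)."""
    return Fraction(v) * Fraction(5, 18)

# A train at 60 km/hr crosses a pole in 9 seconds: a pole has no
# length, so the train's length is simply speed * time.
length = kmph_to_mps(60) * 9
print(length)  # 150 metres

# Trains of 140 m and 160 m at 60 km/hr and 40 km/hr in opposite
# directions: relative speed is the sum of the speeds, and the
# distance to cover is the sum of the lengths.
t = Fraction(140 + 160) / (kmph_to_mps(60) + kmph_to_mps(40))
print(float(t))  # 10.8 seconds
```

The same two-step pattern (convert units, then distance over relative speed) answers almost every question in the set; only the Howrah–Patna meeting-point questions need a different argument.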
How to modify output codec for logstash

I'm not expecting a step by step on this, but here is the backstory and what I hope to achieve. I have logstash decoding netflow and putting it out using json_lines. This is working well, but it's very verbose and too large to store. How can I modify which fields are put out in this file? I'm hoping to pare it down to just a few fields. Secondarily, is it possible to convert it to plain text? When I use plain I get a timestamp and %message. I appreciate any direction on this.

You might find a prune filter helpful for removing top-level fields (you can either whitelist fields to keep or blacklist fields to remove). mutate can also remove fields, and in the worst case you can resort to ruby. What do you mean by plain text? A plain codec by default will emit the timestamp, hostname, and contents of [message]. You can tell it to use a different format. If you really only do want a handful of fields you could supply the list of fields in the format option of the codec and not bother pruning the rest.

codec => plain { format => "foo is %{foo}. bar is %{bar}" }

Hey, okay. So the prune filter definitely looks like what I want, but I'm having a hard time getting what I'm looking for.
My flow logs are as such:

{"host":"1.1.1.1","@timestamp":"2019-09-27T20:09:48.000Z","@version":"1","netflow":{"ipv4_dst_addr":"2.2.2.2","src_as":16509,"ipv4_src_addr":"3.3.3.3","dst_as":0,"in_bytes":84,"first_switched":"2019-09-27T20:04:48.832Z","last_switched":"2019-09-27T20:04:48.832Z","input_snmp":95,"in_pkts":1,"flow_seq_num":206430370,"l4_dst_port":2048,"flowset_id":256,"version":9,"protocol":1,"l4_src_port":0,"ipv4_next_hop":"4.4.4.4"}}

And my config is this:

input {
  udp {
    port => 9995
    codec => netflow
  }
}
filter {
  mutate {
    copy => { "[netflow][ipv4_dst_addr][ipv4_src_addr][low_seq_num]" => "what_i_want" }
  }
  prune {
    whitelist_names => [ "what_i_want" ]
  }
}
output {
  s3 {
    access_key_id => "X"
    secret_access_key => "X"
    bucket => "1logflowtest"
    codec => "json_lines"
    id => "NetflowV9"
    #encoding => "gzip"
    size_file => "1024000000"
    time_file => "60"
  }
  file {
    path => "/var/log/logstash/test.log"
    codec => "json_lines"
  }
}

and the output is

{}

heh. I'm looking to cherry-pick fields. I'd like to start with just source and destination and go from there.

Your filter should look more like

mutate {
  copy => {
    "[netflow][ipv4_dst_addr]" => "[ipv4_dst_addr]"
    "[netflow][ipv4_src_addr]" => "[ipv4_src_addr]"
    "[netflow][flow_seq_num]" => "[flow_seq_num]"
  }
}
prune {
  whitelist_names => [ "flow_seq_num", "ipv4_dst_addr", "ipv4_src_addr", "@timestamp" ]
}

Thank you, @Badger. I was able to build on what you posted to fine-tune something that was relevant. One last question to avoid making a new thread: is there any way to have the full output sent off to another destination, or would that require a separate conf file?

Not sure I understand the question. You can have multiple outputs in a configuration. But you clearly know that, since you have two outputs in your existing configuration.

Sorry, let me clarify. I would like one output to be unaffected by the prune.

You can do that using pipeline-to-pipeline communications with a forked path pattern.
If you are running on an old version you can do it by using a clone filter, then making the prune and output conditional upon the type set by the clone.
/var

Added in 1.8.8

/var [-gnspB] <%var> [[= ]value]

Sets the value of local variable %var to the specified value. Syntax can be either var %var = value or var %var value. Multiple variables can be set in one line using a comma as separator: var %var1 = value, %var2 = value2.

See also /set, /unset, /inc, /dec, $var.

Switches
-s - Display variable assignment value.
-g - Creates a global variable instead.
-n - Treat the value as plain text, even if arithmetic operators are used.
-i - Initializes a variable only if it does not already exist as a local or global variable.
-p - Permits value to be 2 literal double quotes and permits value to end with a single $chr(32) space. Also enables -n switch behavior.
-B - Performs a $calcint calculation instead of regular $calc when arithmetic operators are used. (AdiIRC only)

Parameters
<%var> - The variable to set.
[ [= ]value ] - Value to set the variable to. (can be a single arithmetic expression)

Example

alias factorial {
  var %result = 1, %x = $1
  while (%x) {
    var %result = %result * $v1
    dec %x
  }
  return %result
}

;Returns 3628800
//echo -ag $factorial(10)

Updated by Per Amundsen 9 months ago · 20 revisions
// -*- tab-width: 2; indent-tabs-mode: nil -*- #ifdef HAVE_CONFIG_H #include "config.h" #endif #include #include #include #include #include #include #include #include #include #include #include #include #include "interface/richards_simulation.hh" #include "solver/util_grid_creator.hh" // #include // System and DUNE Headers //=============================================================== // Main program with grid setup //=============================================================== /** * \mainpage DORiE \n * __Dune Operated Richards Equation Solving Environment__ \n * _by Lukas Riedel, Felix Riexinger, Dion Häfner_ \n * \n * Easy access links for the most important documentation pages: * \see main Main Program Function * \see Dune::Dorie::RichardsEquationParameter Class handling Richards Equation Parameter query functions * \see Dune::Dorie::FlowBoundary Class handling Boundary Condition query functions * \see Dune::Dorie::FlowSource Class handling Source Term query functions */ template using Sim = Dune::Dorie::RichardsSimulation; template using Simplex = Dune::Dorie::RichardsSimulationTraits, Dune::GeometryType::BasicType::simplex>,order>; template using Cube = Dune::Dorie::RichardsSimulationTraits, Dune::GeometryType::BasicType::cube>,order>; template using CubeAdaptive = Dune::Dorie::RichardsSimulationTraits, Dune::GeometryType::BasicType::cube>,order>; /// Main Program Function: Initialize parameter file reader, build grid and call Richards Solver. /** As simplex and rectangular grids demand different FiniteElementMaps, the program calls different functions for these tasks. * The objects and types are then passed to the generic solver function.\n * The UG Grid and FEM templates need the dimension as _constexpr_ at compile time, * so we need the tedious dim and FEorder queries. 
* \see RichardsSolver Main functions for assembling operators and solving the problem * \see RichardsSolverSimplex Helper for building simplex Grid Function Spaces * \see RichardsSolverRectangular Helper for building rectangular Grid Function Spaces * \see Dune::Dorie::Traits Variable type definitions * \check Parameter file is specified * \check Output directory is writable * \check Finite Element Order is supported * \check Grid type is supported * \check Dimensions are supported */ int main(int argc, char** argv) { try{ Dune::Timer timer; //Initialize Mpi Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv); if (argc!=2) DUNE_THROW(Dune::IOError,"No parameter file specified!"); const std::string inifilename = argv[1]; // Read ini file Dune::ParameterTree inifile; Dune::ParameterTreeParser ptreeparser; ptreeparser.readINITree(inifilename,inifile); // Allow for a debugger as gdb or lldb to hook into the process, even if run in parallel // (by manually setting the variable i to a nonzero value) bool debugMode = inifile.get("misc.debugMode"); if (debugMode) { int i = 0; char hostname[256]; gethostname(hostname, sizeof(hostname)); if(helper.rank()==0) std::cout << "Debug mode activated. 
Use your debugger to set the variable 'i' to a value > 0 in each process" << std::endl; printf("PID %d on %s ready for attachment\n", getpid(), hostname); fflush(stdout); while (0 == i) sleep(5); } // Read necessary variables const std::string gtype = inifile.get("grid.gridType"); const int dim = inifile.get("grid.dimensions"); const int FEorder = inifile.get("grid.FEorder"); const int verbose = inifile.get("output.verbose"); const std::string outputPath = inifile.get("output.outputPath"); const bool adaptivity = inifile.get("adaptivity.useAdaptivity"); Dune::Dorie::AdaptivityPolicy adapt_policy = Dune::Dorie::AdaptivityPolicy::None; if (adaptivity) { adapt_policy = Dune::Dorie::AdaptivityPolicy::WaterFlux; } // Attempt to create output directory mkdir(outputPath.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH); int result = access(outputPath.c_str(), W_OK); if (result != 0) DUNE_THROW(Dune::IOError,"Output folder " << outputPath << " not writable"); if (helper.rank()==0){ std::cout << "INPUT FILE: " << inifilename << std::endl; std::cout << "BC FILE: " << inifile.get("boundary.file") << std::endl; std::cout << "OUTPUT PATH: " << inifile.get("output.outputPath") << std::endl; if (verbose>0){ if(Dune::MPIHelper::isFake) std::cout << "MPI: SEQUENTIAL RUN" << std::endl; else std::cout << "MPI: PARALLEL RUN ON " << helper.size() << " PROCESS(ES)" << std::endl; } } if (dim==2) { if (gtype == "gmsh"){ Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ case 1:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } case 2:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } // case 3:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.set_policy(adapt_policy); // sim.run(); // break; // } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } else if (gtype == "rectangular"){ 
if(adaptivity){ Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ case 1:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } case 2:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } case 3:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } else{ // no adaptivity Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ case 1:{ Sim> sim(inifile, grid_mapper, helper); sim.run(); break; } case 2:{ Sim> sim(inifile, grid_mapper, helper); sim.run(); break; } case 3:{ Sim> sim(inifile, grid_mapper, helper); sim.run(); break; } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } } else DUNE_THROW(Dune::NotImplemented,"Grid Type not supported!"); } else if (dim==3) { if (gtype == "gmsh"){ Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ // case 1:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.set_policy(adapt_policy); // sim.run(); // break; // } // case 2:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.set_policy(adapt_policy); // sim.run(); // break; // } // case 3:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.set_policy(adapt_policy); // sim.run(); // break; // } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } else if (gtype == "rectangular"){ if(adaptivity){ Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ case 1:{ Sim> sim(inifile, grid_mapper, helper); sim.set_policy(adapt_policy); sim.run(); break; } case 2:{ Sim> sim(inifile, grid_mapper, helper); 
sim.set_policy(adapt_policy); sim.run(); break; } // case 3:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.set_policy(adapt_policy); // sim.run(); // break; // } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } else{ // no adaptivity Dune::Dorie::GridCreator> grid_creator(inifile, helper); auto grid_mapper = grid_creator.get_mapper(); switch(FEorder){ case 1:{ Sim> sim(inifile, grid_mapper, helper); sim.run(); break; } case 2:{ Sim> sim(inifile, grid_mapper, helper); sim.run(); break; } // case 3:{ * // Sim> sim(inifile, grid_mapper, helper); // sim.run(); // break; // } default: DUNE_THROW(Dune::NotImplemented,"Finite Element Order (grid.FEorder) not supported!"); } } } else DUNE_THROW(Dune::NotImplemented,"Grid Type not supported!"); } // grid_dim != 2,3 else{ DUNE_THROW(Dune::NotImplemented,"Number of dimensions (grid.dimensions) not supported!"); } if(helper.rank()==0){ std::cout << "PROGRAM TERMINATED SUCCESSFULLY" << std::endl; std::cout << "::: Execution time " << std::setw(12) << std::setprecision(4) << std::scientific << timer.elapsed() << std::endl; } return 0; } catch (Dune::Exception &e){ std::cerr << "Dune reported error: " << e << std::endl; return 1; } catch (...){ std::cerr << "Unknown exception thrown!" << std::endl; throw; return 1; } }
ORELA Math: Probability Chapter Exam

Exam Instructions: Choose your answers to the questions and click 'Next' to see the next set of questions. You can skip questions if you would like and come back to them later with the yellow "Go To First Skipped Question" button. When you have completed the practice exam, a green submit button will appear. Click it to see your results. Good luck!

1. Jessie has a deck of 52 regular playing cards and a bag of six marbles. In the bag, there are two blue marbles, three green marbles, and one white marble. What is the probability of Jessie drawing an ace from the deck of cards and a blue marble from the bag?
2. The probability of winning a prize at a ball toss at a carnival is 2/7. What is the probability of not winning a prize?
3. Solve the expression 5P4 (P = permutation).
4. Jimmy is making multi-flavored ice cream cones by scooping in different flavors one at a time. Jimmy has 6 different flavors but can only put 3 flavors in each cone. The order of the flavors is important to him as it affects how he tastes each ice cream. How many different arrangements of cones can Jimmy make?
5. Mr. Garcia always dresses in a shirt, slacks, and dress shoes. There are 64 different outfits he can make with them. If he has 8 shirts and 2 pairs of shoes, then how many pairs of slacks does he have?
6. In probability, a/an _____ is an event of a sample space.
7. Lisa has a two-sided coin with heads and tails. She also has a spinner with four colors: green, blue, red, and yellow. What is the probability of Lisa flipping the coin and getting heads and spinning the spinner to land on green?
8. All of the letters that spell MISSISSIPPI are put into a bag. What is the probability of selecting a vowel, and then, after replacing the letter, also drawing an S?
9. Solve 8! (factorial).
10. Solve the expression 7P2 (P = permutation).
11. The letters that spell out the state CALIFORNIA are cut and placed in a bag. What is the probability that the 3rd letter selected will be a C if the first two letters selected were both I's? (Letters were not replaced.)
12. Annie writes the numbers 1 through 10 on note cards. She flips the cards over so she cannot see the number and selects three cards from the stack. What is the probability that she has selected the cards numbered 1, 2, and 3?
13. Twenty students compete in a school-wide marathon and each student is of comparable running ability. Of the 20 students, 15 were boys and 5 were girls. What is the probability that boys will place 1st, 2nd, and 3rd in the marathon?
14. Solve the following combination:
15. While playing with a standard deck of playing cards, what is the probability that Jim's 5th card will be red after selecting 4 cards that were also red and not replacing them? (Hint: There are 52 cards in a deck, which contains 26 red and 26 black cards.)
16. Kyle works at a local music store. The store receives a shipment of new CDs in a box. In the shipment, there are 10 country CDs, 5 rock CDs, 12 hip hop CDs, and 3 jazz CDs. What is the probability that the first CD Kyle chooses from the box will be country?
17. At Peter's Pizzeria, you can create your pizza choosing from 3 toppings, 3 cheese combinations, and 4 sizes. What are the events in this scenario?
18. Twenty students compete in a school-wide marathon and each student is of comparable running ability. Of the 20 students, 15 were boys and 5 were girls. What is the probability that girls will place 1st, 2nd, and 3rd in the marathon?
19. Kate and Kyle are playing a game. They must flip a coin and spin a spinner that has 12 equal sections numbered 1 through 12. What is the probability that Kyle will flip a heads and spin the spinner and get an even number?
20. Jimmy has the letters for the state of MISSISSIPPI written on cards, one letter per card. He turns the cards over and mixes up the order. If he selects one card at a time without replacing the cards, what is the probability that he will spell the word MISS in order?
21. What is a set (S) of a random experiment that includes all possible outcomes of the experiment?
22. There is a bag of red and blue marbles. If you keep grabbing marbles out of the bag without replacing them until you get a blue marble, is each grab an independent event? Why or why not?
23. There is a bag of blue and red marbles and a deck of cards. What is the type of probability if one wants to know the chances of pulling out a blue marble and then an ace from the deck of cards?
24. Calculate 4! (factorial).
25. All of the letters of MISSISSIPPI are put in a bag. What is the probability of selecting an M and then, after not replacing the letter, selecting a P?
26. Solve the following combination:
27. How would you determine the total number of possible combinations in a situation using the fundamental counting principle?
28. It's the first day of school and Anne is comparing her class schedule with her friends. Thirty percent of Anne's friends are in Geometry and World History with her. She has 60% of her friends in Geometry, and she has 40% of her friends in World History. What is the probability that one of her friends is in Geometry or World History with Anne?
29. Jenny has a bowl of M&M's that has 6 brown, 3 green, 4 red, and 12 yellow M&M's. She selects a yellow M&M and does not replace it. What is the probability that her second selection will be a brown M&M?
30. Using a spinner with 12 equal sections numbered 1 through 12, what is the probability that Jim will spin a number less than 8?
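Independent-event questions like the card-and-marble one above are answered by multiplying the individual probabilities, and the "not winning" question by the complement rule. A quick sketch in Python with exact fractions (assuming the standard 52-card deck with 4 aces, and the stated bag of 2 blue marbles out of 6):

```python
from fractions import Fraction

# P(ace from a 52-card deck) * P(blue marble from the bag of six):
# the draws are independent, so the probabilities multiply.
p_ace = Fraction(4, 52)
p_blue = Fraction(2, 6)
print(p_ace * p_blue)      # 1/39

# Complement rule for the carnival ball toss:
# P(not winning) = 1 - P(winning) = 1 - 2/7.
print(1 - Fraction(2, 7))  # 5/7
```

Using Fraction rather than floats keeps the answers in the same reduced-fraction form the multiple-choice options would use.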
#!/usr/local/bin/perl # A bit of an evil hack but it post processes the file ../MINFO which # is generated by `make files` in the top directory. # This script outputs one mega makefile that has no shell stuff or any # funny stuff # $INSTALLTOP="/usr/local/ssl"; $OPTIONS=""; $ssl_version=""; $banner="\t\@echo Building OpenSSL"; open(IN,") { $ssl_version=$1 if (/^VERSION=(.*)$/); $OPTIONS=$1 if (/^OPTIONS=(.*)$/); $INSTALLTOP=$1 if (/^INSTALLTOP=(.*$)/); } close(IN); die "Makefile.ssl is not the toplevel Makefile!\n" if $ssl_version eq ""; $infile="MINFO"; %ops=( "VC-WIN32", "Microsoft Visual C++ [4-6] - Windows NT or 9X", "VC-CE", "Microsoft eMbedded Visual C++ 3.0 - Windows CE ONLY", "VC-NT", "Microsoft Visual C++ [4-6] - Windows NT ONLY", "VC-W31-16", "Microsoft Visual C++ 1.52 - Windows 3.1 - 286", "VC-WIN16", "Alias for VC-W31-32", "VC-W31-32", "Microsoft Visual C++ 1.52 - Windows 3.1 - 386+", "VC-MSDOS","Microsoft Visual C++ 1.52 - MSDOS", "Mingw32", "GNU C++ - Windows NT or 9x", "Mingw32-files", "Create files with DOS copy ...", "BC-NT", "Borland C++ 4.5 - Windows NT", "BC-W31", "Borland C++ 4.5 - Windows 3.1 - PROBABLY NOT WORKING", "BC-MSDOS","Borland C++ 4.5 - MSDOS", "linux-elf","Linux elf", "ultrix-mips","DEC mips ultrix", "FreeBSD","FreeBSD distribution", "OS2-EMX", "EMX GCC OS/2", "default","cc under unix", ); $platform=""; foreach (@ARGV) { if (!&read_options && !defined($ops{$_})) { print STDERR "unknown option - $_\n"; print STDERR "usage: perl mk1mf.pl [options] [system]\n"; print STDERR "\nwhere [system] can be one of the following\n"; foreach $i (sort keys %ops) { printf STDERR "\t%-10s\t%s\n",$i,$ops{$i}; } print STDERR <<"EOF"; and [options] can be one of no-md2 no-md4 no-md5 no-sha no-mdc2 - Skip this digest no-ripemd no-rc2 no-rc4 no-rc5 no-idea no-des - Skip this symetric cipher no-bf no-cast no-aes no-rsa no-dsa no-dh - Skip this public key cipher no-ssl2 no-ssl3 - Skip this version of SSL just-ssl - remove all non-ssl keys/digest no-asm - 
No x86 asm no-krb5 - No KRB5 no-ec - No EC no-ecdsa - No ECDSA no-ecdh - No ECDH no-engine - No engine no-hw - No hw nasm - Use NASM for x86 asm gaswin - Use GNU as with Mingw32 no-socks - No socket code no-err - No error strings dll/shlib - Build shared libraries (MS) debug - Debug build profile - Profiling build gcc - Use Gcc (unix) Values that can be set TMP=tmpdir OUT=outdir SRC=srcdir BIN=binpath INC=header-outdir CC=C-compiler -L -l - extra library flags (unix) - - extra 'cc' flags, added (MS), or replace (unix) EOF exit(1); } $platform=$_; } foreach (grep(!/^$/, split(/ /, $OPTIONS))) { print STDERR "unknown option - $_\n" if !&read_options; } $no_mdc2=1 if ($no_des); $no_ssl3=1 if ($no_md5 || $no_sha); $no_ssl3=1 if ($no_rsa && $no_dh); $no_ssl2=1 if ($no_md5); $no_ssl2=1 if ($no_rsa); $out_def="out"; $inc_def="outinc"; $tmp_def="tmp"; $mkdir="-mkdir"; ($ssl,$crypto)=("ssl","crypto"); $ranlib="echo ranlib"; $cc=(defined($VARS{'CC'}))?$VARS{'CC'}:'cc'; $src_dir=(defined($VARS{'SRC'}))?$VARS{'SRC'}:'.'; $bin_dir=(defined($VARS{'BIN'}))?$VARS{'BIN'}:''; # $bin_dir.=$o causes a core dump on my sparc :-( $NT=0; push(@INC,"util/pl","pl"); if ($platform eq "VC-MSDOS") { $asmbits=16; $msdos=1; require 'VC-16.pl'; } elsif ($platform eq "VC-W31-16") { $asmbits=16; $msdos=1; $win16=1; require 'VC-16.pl'; } elsif (($platform eq "VC-W31-32") || ($platform eq "VC-WIN16")) { $asmbits=32; $msdos=1; $win16=1; require 'VC-16.pl'; } elsif (($platform eq "VC-WIN32") || ($platform eq "VC-NT")) { $NT = 1 if $platform eq "VC-NT"; require 'VC-32.pl'; } elsif ($platform eq "VC-CE") { require 'VC-CE.pl'; } elsif ($platform eq "Mingw32") { require 'Mingw32.pl'; } elsif ($platform eq "Mingw32-files") { require 'Mingw32f.pl'; } elsif ($platform eq "BC-NT") { $bc=1; require 'BC-32.pl'; } elsif ($platform eq "BC-W31") { $bc=1; $msdos=1; $w16=1; require 'BC-16.pl'; } elsif ($platform eq "BC-Q16") { $msdos=1; $w16=1; $shlib=0; $qw=1; require 'BC-16.pl'; } elsif ($platform eq "BC-MSDOS") { 
$asmbits=16; $msdos=1; require 'BC-16.pl'; } elsif ($platform eq "FreeBSD") { require 'unix.pl'; $cflags='-DTERMIO -D_ANSI_SOURCE -O2 -fomit-frame-pointer'; } elsif ($platform eq "linux-elf") { require "unix.pl"; require "linux.pl"; $unix=1; } elsif ($platform eq "ultrix-mips") { require "unix.pl"; require "ultrix.pl"; $unix=1; } elsif ($platform eq "OS2-EMX") { $wc=1; require 'OS2-EMX.pl'; } else { require "unix.pl"; $unix=1; $cflags.=' -DTERMIO'; } $out_dir=(defined($VARS{'OUT'}))?$VARS{'OUT'}:$out_def.($debug?".dbg":""); $tmp_dir=(defined($VARS{'TMP'}))?$VARS{'TMP'}:$tmp_def.($debug?".dbg":""); $inc_dir=(defined($VARS{'INC'}))?$VARS{'INC'}:$inc_def; $bin_dir=$bin_dir.$o unless ((substr($bin_dir,-1,1) eq $o) || ($bin_dir eq '')); $cflags.=" -DOPENSSL_NO_IDEA" if $no_idea; $cflags.=" -DOPENSSL_NO_AES" if $no_aes; $cflags.=" -DOPENSSL_NO_RC2" if $no_rc2; $cflags.=" -DOPENSSL_NO_RC4" if $no_rc4; $cflags.=" -DOPENSSL_NO_RC5" if $no_rc5; $cflags.=" -DOPENSSL_NO_MD2" if $no_md2; $cflags.=" -DOPENSSL_NO_MD4" if $no_md4; $cflags.=" -DOPENSSL_NO_MD5" if $no_md5; $cflags.=" -DOPENSSL_NO_SHA" if $no_sha; $cflags.=" -DOPENSSL_NO_SHA1" if $no_sha1; $cflags.=" -DOPENSSL_NO_RIPEMD" if $no_rmd160; $cflags.=" -DOPENSSL_NO_MDC2" if $no_mdc2; $cflags.=" -DOPENSSL_NO_BF" if $no_bf; $cflags.=" -DOPENSSL_NO_CAST" if $no_cast; $cflags.=" -DOPENSSL_NO_DES" if $no_des; $cflags.=" -DOPENSSL_NO_RSA" if $no_rsa; $cflags.=" -DOPENSSL_NO_DSA" if $no_dsa; $cflags.=" -DOPENSSL_NO_DH" if $no_dh; $cflags.=" -DOPENSSL_NO_SOCK" if $no_sock; $cflags.=" -DOPENSSL_NO_SSL2" if $no_ssl2; $cflags.=" -DOPENSSL_NO_SSL3" if $no_ssl3; $cflags.=" -DOPENSSL_NO_ERR" if $no_err; $cflags.=" -DOPENSSL_NO_KRB5" if $no_krb5; $cflags.=" -DOPENSSL_NO_EC" if $no_ec; $cflags.=" -DOPENSSL_NO_ECDSA" if $no_ecdsa; $cflags.=" -DOPENSSL_NO_ECDH" if $no_ecdh; $cflags.=" -DOPENSSL_NO_ENGINE" if $no_engine; $cflags.=" -DOPENSSL_NO_HW" if $no_hw; #$cflags.=" -DRSAref" if $rsaref ne ""; ## if ($unix) ## { $cflags="$c_flags" if 
($c_flags ne ""); } ##else { $cflags="$c_flags$cflags" if ($c_flags ne ""); } $ex_libs="$l_flags$ex_libs" if ($l_flags ne ""); %shlib_ex_cflags=("SSL" => " -DOPENSSL_BUILD_SHLIBSSL", "CRYPTO" => " -DOPENSSL_BUILD_SHLIBCRYPTO"); if ($msdos) { $banner ="\t\@echo Make sure you have run 'perl Configure $platform' in the\n"; $banner.="\t\@echo top level directory, if you don't have perl, you will\n"; $banner.="\t\@echo need to probably edit crypto/bn/bn.h, check the\n"; $banner.="\t\@echo documentation for details.\n"; } # have to do this to allow $(CC) under unix $link="$bin_dir$link" if ($link !~ /^\$/); $INSTALLTOP =~ s|/|$o|g; $defs= <<"EOF"; # This makefile has been automatically generated from the OpenSSL distribution. # This single makefile will build the complete OpenSSL distribution and # by default leave the 'intertesting' output files in .${o}out and the stuff # that needs deleting in .${o}tmp. # The file was generated by running 'make makefile.one', which # does a 'make files', which writes all the environment variables from all # the makefiles to the file call MINFO. This file is used by # util${o}mk1mf.pl to generate makefile.one. # The 'makefile per directory' system suites me when developing this # library and also so I can 'distribute' indervidual library sections. # The one monster makefile better suits building in non-unix # environments. 
EOF if ($platform eq "VC-CE") { $defs.= <<"EOF"; !INCLUDE <\$(WCECOMPAT)/wcedefs.mak> EOF } $defs.= <<"EOF"; INSTALLTOP=$INSTALLTOP # Set your compiler options PLATFORM=$platform CC=$bin_dir${cc} CFLAG=$cflags APP_CFLAG=$app_cflag LIB_CFLAG=$lib_cflag SHLIB_CFLAG=$shl_cflag APP_EX_OBJ=$app_ex_obj SHLIB_EX_OBJ=$shlib_ex_obj # add extra libraries to this define, for solaris -lsocket -lnsl would # be added EX_LIBS=$ex_libs # The OpenSSL directory SRC_D=$src_dir LINK=$link LFLAGS=$lflags RSC=$rsc BN_ASM_OBJ=$bn_asm_obj BN_ASM_SRC=$bn_asm_src BNCO_ASM_OBJ=$bnco_asm_obj BNCO_ASM_SRC=$bnco_asm_src DES_ENC_OBJ=$des_enc_obj DES_ENC_SRC=$des_enc_src BF_ENC_OBJ=$bf_enc_obj BF_ENC_SRC=$bf_enc_src CAST_ENC_OBJ=$cast_enc_obj CAST_ENC_SRC=$cast_enc_src RC4_ENC_OBJ=$rc4_enc_obj RC4_ENC_SRC=$rc4_enc_src RC5_ENC_OBJ=$rc5_enc_obj RC5_ENC_SRC=$rc5_enc_src MD5_ASM_OBJ=$md5_asm_obj MD5_ASM_SRC=$md5_asm_src SHA1_ASM_OBJ=$sha1_asm_obj SHA1_ASM_SRC=$sha1_asm_src RMD160_ASM_OBJ=$rmd160_asm_obj RMD160_ASM_SRC=$rmd160_asm_src # The output directory for everything intersting OUT_D=$out_dir # The output directory for all the temporary muck TMP_D=$tmp_dir # The output directory for the header files INC_D=$inc_dir INCO_D=$inc_dir${o}openssl CP=$cp RM=$rm RANLIB=$ranlib MKDIR=$mkdir MKLIB=$bin_dir$mklib MLFLAGS=$mlflags ASM=$bin_dir$asm ###################################################### # You should not need to touch anything below this point ###################################################### E_EXE=openssl SSL=$ssl CRYPTO=$crypto # BIN_D - Binary output directory # TEST_D - Binary test file output directory # LIB_D - library output directory # Note: if you change these point to different directories then uncomment out # the lines around the 'NB' comment below. 
# BIN_D=\$(OUT_D) TEST_D=\$(OUT_D) LIB_D=\$(OUT_D) # INCL_D - local library directory # OBJ_D - temp object file directory OBJ_D=\$(TMP_D) INCL_D=\$(TMP_D) O_SSL= \$(LIB_D)$o$plib\$(SSL)$shlibp O_CRYPTO= \$(LIB_D)$o$plib\$(CRYPTO)$shlibp SO_SSL= $plib\$(SSL)$so_shlibp SO_CRYPTO= $plib\$(CRYPTO)$so_shlibp L_SSL= \$(LIB_D)$o$plib\$(SSL)$libp L_CRYPTO= \$(LIB_D)$o$plib\$(CRYPTO)$libp L_LIBS= \$(L_SSL) \$(L_CRYPTO) ###################################################### # Don't touch anything below this point ###################################################### INC=-I\$(INC_D) -I\$(INCL_D) APP_CFLAGS=\$(INC) \$(CFLAG) \$(APP_CFLAG) LIB_CFLAGS=\$(INC) \$(CFLAG) \$(LIB_CFLAG) SHLIB_CFLAGS=\$(INC) \$(CFLAG) \$(LIB_CFLAG) \$(SHLIB_CFLAG) LIBS_DEP=\$(O_CRYPTO) \$(O_SSL) ############################################# EOF $rules=<<"EOF"; all: banner \$(TMP_D) \$(BIN_D) \$(TEST_D) \$(LIB_D) \$(INCO_D) headers lib exe banner: $banner \$(TMP_D): \$(MKDIR) \$(TMP_D) # NB: uncomment out these lines if BIN_D, TEST_D and LIB_D are different #\$(BIN_D): # \$(MKDIR) \$(BIN_D) # #\$(TEST_D): # \$(MKDIR) \$(TEST_D) \$(LIB_D): \$(MKDIR) \$(LIB_D) \$(INCO_D): \$(INC_D) \$(MKDIR) \$(INCO_D) \$(INC_D): \$(MKDIR) \$(INC_D) headers: \$(HEADER) \$(EXHEADER) @ lib: \$(LIBS_DEP) exe: \$(T_EXE) \$(BIN_D)$o\$(E_EXE)$exep install: \$(MKDIR) \$(INSTALLTOP) \$(MKDIR) \$(INSTALLTOP)${o}bin \$(MKDIR) \$(INSTALLTOP)${o}include \$(MKDIR) \$(INSTALLTOP)${o}include${o}openssl \$(MKDIR) \$(INSTALLTOP)${o}lib \$(CP) \$(INCO_D)${o}*.\[ch\] \$(INSTALLTOP)${o}include${o}openssl \$(CP) \$(BIN_D)$o\$(E_EXE)$exep \$(INSTALLTOP)${o}bin \$(CP) \$(O_SSL) \$(INSTALLTOP)${o}lib \$(CP) \$(O_CRYPTO) \$(INSTALLTOP)${o}lib clean: \$(RM) \$(TMP_D)$o*.* vclean: \$(RM) \$(TMP_D)$o*.* \$(RM) \$(OUT_D)$o*.* EOF my $platform_cpp_symbol = "MK1MF_PLATFORM_$platform"; $platform_cpp_symbol =~ s/-/_/g; if (open(IN,"crypto/buildinf.h")) { # Remove entry for this platform in existing file buildinf.h. 
my $old_buildinf_h = ""; while () { if (/^\#ifdef $platform_cpp_symbol$/) { while () { last if (/^\#endif/); } } else { $old_buildinf_h .= $_; } } close(IN); open(OUT,">crypto/buildinf.h") || die "Can't open buildinf.h"; print OUT $old_buildinf_h; close(OUT); } open (OUT,">>crypto/buildinf.h") || die "Can't open buildinf.h"; printf OUT <; for (;;) { chop; ($key,$val)=/^([^=]+)=(.*)/; if ($key eq "RELATIVE_DIRECTORY") { if ($lib ne "") { $uc=$lib; $uc =~ s/^lib(.*)\.a/$1/; $uc =~ tr/a-z/A-Z/; $lib_nam{$uc}=$uc; $lib_obj{$uc}.=$libobj." "; } last if ($val eq "FINISHED"); $lib=""; $libobj=""; $dir=$val; } if ($key eq "TEST") { $test.=&var_add($dir,$val); } if (($key eq "PROGS") || ($key eq "E_OBJ")) { $e_exe.=&var_add($dir,$val); } if ($key eq "LIB") { $lib=$val; $lib =~ s/^.*\/([^\/]+)$/$1/; } if ($key eq "EXHEADER") { $exheader.=&var_add($dir,$val); } if ($key eq "HEADER") { $header.=&var_add($dir,$val); } if ($key eq "LIBOBJ") { $libobj=&var_add($dir,$val); } if (!($_=)) { $_="RELATIVE_DIRECTORY=FINISHED\n"; } } close(IN); # Strip of trailing ' ' foreach (keys %lib_obj) { $lib_obj{$_}=&clean_up_ws($lib_obj{$_}); } $test=&clean_up_ws($test); $e_exe=&clean_up_ws($e_exe); $exheader=&clean_up_ws($exheader); $header=&clean_up_ws($header); # First we strip the exheaders from the headers list foreach (split(/\s+/,$exheader)){ $h{$_}=1; } foreach (split(/\s+/,$header)) { $h.=$_." 
" unless $h{$_}; } chop($h); $header=$h; $defs.=&do_defs("HEADER",$header,"\$(INCL_D)",".h"); $rules.=&do_copy_rule("\$(INCL_D)",$header,".h"); $defs.=&do_defs("EXHEADER",$exheader,"\$(INCO_D)",".h"); $rules.=&do_copy_rule("\$(INCO_D)",$exheader,".h"); $defs.=&do_defs("T_OBJ",$test,"\$(OBJ_D)",$obj); $rules.=&do_compile_rule("\$(OBJ_D)",$test,"\$(APP_CFLAGS)"); $defs.=&do_defs("E_OBJ",$e_exe,"\$(OBJ_D)",$obj); $rules.=&do_compile_rule("\$(OBJ_D)",$e_exe,'-DMONOLITH $(APP_CFLAGS)'); foreach (values %lib_nam) { $lib_obj=$lib_obj{$_}; local($slib)=$shlib; if (($_ eq "SSL") && $no_ssl2 && $no_ssl3) { $rules.="\$(O_SSL):\n\n"; next; } if (($bn_asm_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s\S*\/bn_asm\S*/ \$(BN_ASM_OBJ)/; $rules.=&do_asm_rule($bn_asm_obj,$bn_asm_src); } if (($bnco_asm_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj .= "\$(BNCO_ASM_OBJ)"; $rules.=&do_asm_rule($bnco_asm_obj,$bnco_asm_src); } if (($des_enc_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s\S*des_enc\S*/ \$(DES_ENC_OBJ)/; $lib_obj =~ s/\s\S*\/fcrypt_b\S*\s*/ /; $rules.=&do_asm_rule($des_enc_obj,$des_enc_src); } if (($bf_enc_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s\S*\/bf_enc\S*/ \$(BF_ENC_OBJ)/; $rules.=&do_asm_rule($bf_enc_obj,$bf_enc_src); } if (($cast_enc_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/(\s\S*\/c_enc\S*)/ \$(CAST_ENC_OBJ)/; $rules.=&do_asm_rule($cast_enc_obj,$cast_enc_src); } if (($rc4_enc_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s\S*\/rc4_enc\S*/ \$(RC4_ENC_OBJ)/; $rules.=&do_asm_rule($rc4_enc_obj,$rc4_enc_src); } if (($rc5_enc_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s\S*\/rc5_enc\S*/ \$(RC5_ENC_OBJ)/; $rules.=&do_asm_rule($rc5_enc_obj,$rc5_enc_src); } if (($md5_asm_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s(\S*\/md5_dgst\S*)/ $1 \$(MD5_ASM_OBJ)/; $rules.=&do_asm_rule($md5_asm_obj,$md5_asm_src); } if (($sha1_asm_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s(\S*\/sha1dgst\S*)/ $1 \$(SHA1_ASM_OBJ)/; 
$rules.=&do_asm_rule($sha1_asm_obj,$sha1_asm_src); } if (($rmd160_asm_obj ne "") && ($_ eq "CRYPTO")) { $lib_obj =~ s/\s(\S*\/rmd_dgst\S*)/ $1 \$(RMD160_ASM_OBJ)/; $rules.=&do_asm_rule($rmd160_asm_obj,$rmd160_asm_src); } $defs.=&do_defs(${_}."OBJ",$lib_obj,"\$(OBJ_D)",$obj); $lib=($slib)?" \$(SHLIB_CFLAGS)".$shlib_ex_cflags{$_}:" \$(LIB_CFLAGS)"; $rules.=&do_compile_rule("\$(OBJ_D)",$lib_obj{$_},$lib); } # hack to add version info on MSVC if (($platform eq "VC-WIN32") || ($platform eq "VC-NT")) { $rules.= <<"EOF"; \$(OBJ_D)\\\$(CRYPTO).res: ms\\version32.rc \$(RSC) /fo"\$(OBJ_D)\\\$(CRYPTO).res" /d CRYPTO ms\\version32.rc \$(OBJ_D)\\\$(SSL).res: ms\\version32.rc \$(RSC) /fo"\$(OBJ_D)\\\$(SSL).res" /d SSL ms\\version32.rc EOF } $defs.=&do_defs("T_EXE",$test,"\$(TEST_D)",$exep); foreach (split(/\s+/,$test)) { $t=&bname($_); $tt="\$(OBJ_D)${o}$t${obj}"; $rules.=&do_link_rule("\$(TEST_D)$o$t$exep",$tt,"\$(LIBS_DEP)","\$(L_LIBS) \$(EX_LIBS)"); } $rules.= &do_lib_rule("\$(SSLOBJ)","\$(O_SSL)",$ssl,$shlib,"\$(SO_SSL)"); $rules.= &do_lib_rule("\$(CRYPTOOBJ)","\$(O_CRYPTO)",$crypto,$shlib,"\$(SO_CRYPTO)"); $rules.=&do_link_rule("\$(BIN_D)$o\$(E_EXE)$exep","\$(E_OBJ)","\$(LIBS_DEP)","\$(L_LIBS) \$(EX_LIBS)"); print $defs; if ($platform eq "linux-elf") { print <<"EOF"; # Generate perlasm output files %.cpp: (cd \$(\@D)/..; PERL=perl make -f Makefile.ssl asm/\$(\@F)) EOF } print "###################################################################\n"; print $rules; ############################################### # strip off any trailing .[och] and append the relative directory # also remembering to do nothing if we are in one of the dropped # directories sub var_add { local($dir,$val)=@_; local(@a,$_,$ret); return("") if $no_engine && $dir =~ /\/engine/; return("") if $no_hw && $dir =~ /\/hw/; return("") if $no_idea && $dir =~ /\/idea/; return("") if $no_aes && $dir =~ /\/aes/; return("") if $no_rc2 && $dir =~ /\/rc2/; return("") if $no_rc4 && $dir =~ /\/rc4/; return("") if 
$no_rc5 && $dir =~ /\/rc5/; return("") if $no_rsa && $dir =~ /\/rsa/; return("") if $no_rsa && $dir =~ /^rsaref/; return("") if $no_dsa && $dir =~ /\/dsa/; return("") if $no_dh && $dir =~ /\/dh/; if ($no_des && $dir =~ /\/des/) { if ($val =~ /read_pwd/) { return("$dir/read_pwd "); } else { return(""); } } return("") if $no_mdc2 && $dir =~ /\/mdc2/; return("") if $no_sock && $dir =~ /\/proxy/; return("") if $no_bf && $dir =~ /\/bf/; return("") if $no_cast && $dir =~ /\/cast/; $val =~ s/^\s*(.*)\s*$/$1/; @a=split(/\s+/,$val); grep(s/\.[och]$//,@a); @a=grep(!/^e_.*_3d$/,@a) if $no_des; @a=grep(!/^e_.*_d$/,@a) if $no_des; @a=grep(!/^e_.*_ae$/,@a) if $no_idea; @a=grep(!/^e_.*_i$/,@a) if $no_aes; @a=grep(!/^e_.*_r2$/,@a) if $no_rc2; @a=grep(!/^e_.*_r5$/,@a) if $no_rc5; @a=grep(!/^e_.*_bf$/,@a) if $no_bf; @a=grep(!/^e_.*_c$/,@a) if $no_cast; @a=grep(!/^e_rc4$/,@a) if $no_rc4; @a=grep(!/(^s2_)|(^s23_)/,@a) if $no_ssl2; @a=grep(!/(^s3_)|(^s23_)/,@a) if $no_ssl3; @a=grep(!/(_sock$)|(_acpt$)|(_conn$)|(^pxy_)/,@a) if $no_sock; @a=grep(!/(^md2)|(_md2$)/,@a) if $no_md2; @a=grep(!/(^md4)|(_md4$)/,@a) if $no_md4; @a=grep(!/(^md5)|(_md5$)/,@a) if $no_md5; @a=grep(!/(rmd)|(ripemd)/,@a) if $no_rmd160; @a=grep(!/(^d2i_r_)|(^i2d_r_)/,@a) if $no_rsa; @a=grep(!/(^p_open$)|(^p_seal$)/,@a) if $no_rsa; @a=grep(!/(^pem_seal$)/,@a) if $no_rsa; @a=grep(!/(m_dss$)|(m_dss1$)/,@a) if $no_dsa; @a=grep(!/(^d2i_s_)|(^i2d_s_)|(_dsap$)/,@a) if $no_dsa; @a=grep(!/^n_pkey$/,@a) if $no_rsa || $no_rc4; @a=grep(!/_dhp$/,@a) if $no_dh; @a=grep(!/(^sha[^1])|(_sha$)|(m_dss$)/,@a) if $no_sha; @a=grep(!/(^sha1)|(_sha1$)|(m_dss1$)/,@a) if $no_sha1; @a=grep(!/_mdc2$/,@a) if $no_mdc2; @a=grep(!/^engine$/,@a) if $no_engine; @a=grep(!/^hw$/,@a) if $no_hw; @a=grep(!/(^rsa$)|(^genrsa$)/,@a) if $no_rsa; @a=grep(!/(^dsa$)|(^gendsa$)|(^dsaparam$)/,@a) if $no_dsa; @a=grep(!/^gendsa$/,@a) if $no_sha1; @a=grep(!/(^dh$)|(^gendh$)/,@a) if $no_dh; @a=grep(!/(^dh)|(_sha1$)|(m_dss1$)/,@a) if $no_sha1; grep($_="$dir/$_",@a); 
@a=grep(!/(^|\/)s_/,@a) if $no_sock; @a=grep(!/(^|\/)bio_sock/,@a) if $no_sock; $ret=join(' ',@a)." "; return($ret); } # change things so that each 'token' is only separated by one space sub clean_up_ws { local($w)=@_; $w =~ s/^\s*(.*)\s*$/$1/; $w =~ s/\s+/ /g; return($w); } sub do_defs { local($var,$files,$location,$postfix)=@_; local($_,$ret,$pf); local(*OUT,$tmp,$t); $files =~ s/\//$o/g if $o ne '/'; $ret="$var="; $n=1; $Vars{$var}.=""; foreach (split(/ /,$files)) { $orig=$_; $_=&bname($_) unless /^\$/; if ($n++ == 2) { $n=0; $ret.="\\\n\t"; } if (($_ =~ /bss_file/) && ($postfix eq ".h")) { $pf=".c"; } else { $pf=$postfix; } if ($_ =~ /BN_ASM/) { $t="$_ "; } elsif ($_ =~ /BNCO_ASM/){ $t="$_ "; } elsif ($_ =~ /DES_ENC/) { $t="$_ "; } elsif ($_ =~ /BF_ENC/) { $t="$_ "; } elsif ($_ =~ /CAST_ENC/){ $t="$_ "; } elsif ($_ =~ /RC4_ENC/) { $t="$_ "; } elsif ($_ =~ /RC5_ENC/) { $t="$_ "; } elsif ($_ =~ /MD5_ASM/) { $t="$_ "; } elsif ($_ =~ /SHA1_ASM/){ $t="$_ "; } elsif ($_ =~ /RMD160_ASM/){ $t="$_ "; } else { $t="$location${o}$_$pf "; } $Vars{$var}.="$t "; $ret.=$t; } # hack to add version info on MSVC if ($shlib && ($platform eq "VC-WIN32") || ($platform eq "VC-NT")) { if ($var eq "CRYPTOOBJ") { $ret.="\$(OBJ_D)\\\$(CRYPTO).res "; } elsif ($var eq "SSLOBJ") { $ret.="\$(OBJ_D)\\\$(SSL).res "; } } chop($ret); $ret.="\n\n"; return($ret); } # return the name with the leading path removed sub bname { local($ret)=@_; $ret =~ s/^.*[\\\/]([^\\\/]+)$/$1/; return($ret); } ############################################################## # do a rule for each file that says 'compile' to new direcory # compile the files in '$files' into $to sub do_compile_rule { local($to,$files,$ex)=@_; local($ret,$_,$n); $files =~ s/\//$o/g if $o ne '/'; foreach (split(/\s+/,$files)) { $n=&bname($_); $ret.=&cc_compile_target("$to${o}$n$obj","${_}.c",$ex) } return($ret); } ############################################################## # do a rule for each file that says 'compile' to new direcory sub 
cc_compile_target { local($target,$source,$ex_flags)=@_; local($ret); $ex_flags.=" -DMK1MF_BUILD -D$platform_cpp_symbol" if ($source =~ /cversion/); $target =~ s/\//$o/g if $o ne "/"; $source =~ s/\//$o/g if $o ne "/"; $ret ="$target: \$(SRC_D)$o$source\n\t"; $ret.="\$(CC) ${ofile}$target $ex_flags -c \$(SRC_D)$o$source\n\n"; return($ret); } ############################################################## sub do_asm_rule { local($target,$src)=@_; local($ret,@s,@t,$i); $target =~ s/\//$o/g if $o ne "/"; $src =~ s/\//$o/g if $o ne "/"; @s=split(/\s+/,$src); @t=split(/\s+/,$target); for ($i=0; $i<=$#s; $i++) { $ret.="$t[$i]: $s[$i]\n"; $ret.="\t\$(ASM) $afile$t[$i] \$(SRC_D)$o$s[$i]\n\n"; } return($ret); } sub do_shlib_rule { local($n,$def)=@_; local($ret,$nn); local($t); ($nn=$n) =~ tr/a-z/A-Z/; $ret.="$n.dll: \$(${nn}OBJ)\n"; if ($vc && $w32) { $ret.="\t\$(MKSHLIB) $efile$n.dll $def @<<\n \$(${nn}OBJ_F)\n<<\n"; } $ret.="\n"; return($ret); } # do a rule for each file that says 'copy' to new direcory on change sub do_copy_rule { local($to,$files,$p)=@_; local($ret,$_,$n,$pp); $files =~ s/\//$o/g if $o ne '/'; foreach (split(/\s+/,$files)) { $n=&bname($_); if ($n =~ /bss_file/) { $pp=".c"; } else { $pp=$p; } $ret.="$to${o}$n$pp: \$(SRC_D)$o$_$pp\n\t\$(CP) \$(SRC_D)$o$_$pp $to${o}$n$pp\n\n"; } return($ret); } sub read_options { if (/^no-rc2$/) { $no_rc2=1; } elsif (/^no-rc4$/) { $no_rc4=1; } elsif (/^no-rc5$/) { $no_rc5=1; } elsif (/^no-idea$/) { $no_idea=1; } elsif (/^no-aes$/) { $no_aes=1; } elsif (/^no-des$/) { $no_des=1; } elsif (/^no-bf$/) { $no_bf=1; } elsif (/^no-cast$/) { $no_cast=1; } elsif (/^no-md2$/) { $no_md2=1; } elsif (/^no-md4$/) { $no_md4=1; } elsif (/^no-md5$/) { $no_md5=1; } elsif (/^no-sha$/) { $no_sha=1; } elsif (/^no-sha1$/) { $no_sha1=1; } elsif (/^no-ripemd$/) { $no_ripemd=1; } elsif (/^no-mdc2$/) { $no_mdc2=1; } elsif (/^no-patents$/) { $no_rc2=$no_rc4=$no_rc5=$no_idea=$no_rsa=1; } elsif (/^no-rsa$/) { $no_rsa=1; } elsif (/^no-dsa$/) { $no_dsa=1; 
} elsif (/^no-dh$/) { $no_dh=1; } elsif (/^no-hmac$/) { $no_hmac=1; } elsif (/^no-aes$/) { $no_aes=1; } elsif (/^no-asm$/) { $no_asm=1; } elsif (/^nasm$/) { $nasm=1; } elsif (/^gaswin$/) { $gaswin=1; } elsif (/^no-ssl2$/) { $no_ssl2=1; } elsif (/^no-ssl3$/) { $no_ssl3=1; } elsif (/^no-err$/) { $no_err=1; } elsif (/^no-sock$/) { $no_sock=1; } elsif (/^no-krb5$/) { $no_krb5=1; } elsif (/^no-ec$/) { $no_ec=1; } elsif (/^no-ecdsa$/) { $no_ecdsa=1; } elsif (/^no-ecdh$/) { $no_ecdh=1; } elsif (/^no-engine$/) { $no_engine=1; } elsif (/^no-hw$/) { $no_hw=1; } elsif (/^just-ssl$/) { $no_rc2=$no_idea=$no_des=$no_bf=$no_cast=1; $no_md2=$no_sha=$no_mdc2=$no_dsa=$no_dh=1; $no_ssl2=$no_err=$no_rmd160=$no_rc5=1; $no_aes=1; } elsif (/^rsaref$/) { } elsif (/^gcc$/) { $gcc=1; } elsif (/^debug$/) { $debug=1; } elsif (/^profile$/) { $profile=1; } elsif (/^shlib$/) { $shlib=1; } elsif (/^dll$/) { $shlib=1; } elsif (/^shared$/) { } # We just need to ignore it for now... elsif (/^([^=]*)=(.*)$/){ $VARS{$1}=$2; } elsif (/^-[lL].*$/) { $l_flags.="$_ "; } elsif ((!/^-help/) && (!/^-h/) && (!/^-\?/) && /^-.*$/) { $c_flags.="$_ "; } else { return(0); } return(1); }
physicscatalyst.com

Class 10 Maths Chapter 8: Introduction to Trigonometry Ex 8.1

In this page we have NCERT Solutions for Class 10 Maths Chapter 8: Introduction to Trigonometry for Exercise 8.1. Hope you like them, and do not forget to like, share and comment at the end of the page.

Question 1
In Δ ABC, right-angled at B, AB = 24 cm, BC = 7 cm. Determine:
(i) sin A, cos A
(ii) sin C, cos C

Solution
In Δ ABC, right-angled at B, using the Pythagoras theorem we have
$AC^2 = AB^2 + BC^2 = 576 + 49 = 625$
or AC = 25 cm (taking the positive value only).
Now, in the right-angled triangle ABC where ∠B = 90°,
(i) sin A = BC/AC = 7/25
cos A = AB/AC = 24/25
(ii) sin C = AB/AC = 24/25
cos C = BC/AC = 7/25

Question 2
In the figure below, find tan P – cot R.

Solution
From the figure, PQ = 12 cm and PR = 13 cm. Now by the Pythagoras theorem
$PQ^2 + QR^2 = PR^2$
so QR = 5 cm.
Now tan P = Perp/Base = 5/12
and cot R = Base/Perp = 5/12,
so tan P – cot R = 0.

Question 3
If $\sin A = \frac{3}{4}$, calculate cos A and tan A.

Solution
Given $\sin A = \frac{3}{4}$, i.e. P/H = 3/4, let P = 3k and H = 4k.
Now by the Pythagoras theorem
$P^2 + B^2 = H^2$
$9k^2 + B^2 = 16k^2$
or $B = k \sqrt 7$ (taking the positive value only).
Now $\cos A = {B \over H} = {{\sqrt 7 } \over 4}$
and $\tan A = {{\sin A} \over {\cos A}} = {3 \over {\sqrt 7 }}$

Question 4
Given 15 cot A = 8, find sin A and sec A.

Solution 4
$\cot A = \frac{8}{15}$, i.e. ${B \over P} = {8 \over {15}}$, so let B = 8k and P = 15k.
In a right-angled triangle with angle A, $P^2 + B^2 = H^2$, so H = 17k.
sin A = P/H = 15/17
sec A = H/B = 17/8

Question 5
Given sec θ = 13/12, calculate all other trigonometric ratios.

Solution 5
Given sec θ = 13/12, i.e. H/B = 13/12, let H = 13k and B = 12k.
In a right-angled triangle with angle θ, $P^2 + B^2 = H^2$, so P = 5k.
sin θ = P/H = 5/13
cos θ = B/H = 12/13
tan θ = P/B = 5/12
cosec θ = 1/sin θ = 13/5
cot θ = 1/tan θ = 12/5

Question 6
If ∠A and ∠B are acute angles such that cos A = cos B, then show that ∠A = ∠B.
Solution 6
In a triangle, cos A = cos B gives
AC/AB = BC/AB
⇒ AC = BC.
Since the angles opposite to equal sides of a triangle are equal, ∠A = ∠B.

Question 7
If cot θ = 7/8, evaluate
(i) ${{\left( {1 + \sin\theta } \right)\left( {1 - \sin\theta } \right)} \over {\left( {1 + \cos\theta } \right)\left( {1 - \cos\theta } \right)}}$
(ii) ${\cot ^2}\theta $

Solution
Given cot θ = 7/8, i.e. B/P = 7/8, let B = 7k and P = 8k.
In a right-angled triangle with angle θ, $P^2 + B^2 = H^2$, so $H = k\sqrt {113} $.
$\sin \theta  = {P \over H} = {8 \over {\sqrt {113} }}$
$\cos \theta  = {B \over H} = {7 \over {\sqrt {113} }}$
(i) ${{\left( {1 + \sin\theta } \right)\left( {1 - \sin\theta } \right)} \over {\left( {1 + \cos\theta } \right)\left( {1 - \cos\theta } \right)}} = {{1 - \sin^2 \theta } \over {1 - \cos^2 \theta }} = {{1 - {{64} \over {113}}} \over {1 - {{49} \over {113}}}} = {{49} \over {64}}$
(ii) $\cot^2 \theta = {\left( {7 \over 8} \right)^2} = {49 \over 64}$

Question 8
If 3 cot A = 4, check whether the following is true or not:
${{1 - \tan^2 A} \over {1 + \tan^2 A}} = \cos^2 A - \sin^2 A$

Solution 8
Given cot A = 4/3, i.e. B/P = 4/3, let B = 4k and P = 3k.
In a right-angled triangle with angle A, $P^2 + B^2 = H^2$, so H = 5k.
Now tan A = 1/cot A = 3/4, cos A = B/H = 4/5, sin A = P/H = 3/5.
Taking the LHS,
${{1 - \tan^2 A} \over {1 + \tan^2 A}} = {{1 - {{\left( {{3 \over 4}} \right)}^2}} \over {1 + {{\left( {{3 \over 4}} \right)}^2}}} = {7 \over {25}}$
RHS $= \cos^2 A - \sin^2 A = {16 \over 25} - {9 \over 25} = {7 \over 25}$
So LHS = RHS, and the statement is true.

Question 9
In triangle ABC, right-angled at B, if $\tan A = {1 \over {\sqrt 3 }}$, find the value of:
(i) sin A cos C + cos A sin C
(ii) cos A cos C – sin A sin C

Solution
$\tan A = {1 \over {\sqrt 3 }}$, i.e. ${P \over B} = {1 \over {\sqrt 3 }}$, so let $P = k$ and $B = k\sqrt 3 $.
Now by the Pythagoras theorem, $P^2 + B^2 = H^2$, so H = 2k.
(i) $\sin A \cos C + \cos A \sin C = \left( {{{BC} \over {AC}}} \right)\left( {{{BC} \over {AC}}} \right) + \left( {{{AB} \over {AC}}} \right)\left( {{{AB} \over {AC}}} \right)$ $ = {{{k^2}} \over {4{k^2}}} + 
{{3{k^2}} \over {4{k^2}}} = 1$
(ii) $\cos A \cos C - \sin A \sin C = \left( {{{AB} \over {AC}}} \right)\left( {{{BC} \over {AC}}} \right) - \left( {{{BC} \over {AC}}} \right)\left( {{{AB} \over {AC}}} \right) = 0$

Question 10
In Δ PQR, right-angled at Q, PR + QR = 25 cm and PQ = 5 cm. Determine the values of sin P, cos P and tan P.

Solution
Let QR = x and PR = y. Then x + y = 25, so y = 25 – x.
Now by the Pythagoras theorem
$x^2 + 25 = y^2$
$x^2 + 25 = (25 - x)^2$
Solving, we get x = 12 cm, and then y = 25 – 12 = 13 cm.
Now sin P = 12/13, cos P = 5/13, tan P = 12/5.

Question 11
State whether the following are true or false. Justify your answer.
(i) The value of tan A is always less than 1.
(ii) sec A = 12/5 for some value of angle A.
(iii) cos A is the abbreviation used for the cosecant of angle A.
(iv) cot A is the product of cot and A.
(v) sin θ = 4/3 for some angle θ.

Solution
1. False. The value of tan A increases from 0 to ∞ as A increases from 0° to 90°.
2. True. The value of sec A increases from 1 to ∞, and 12/5 > 1.
3. False. cos A is the abbreviation used for the cosine of angle A.
4. False. cot A is a single symbol; we cannot separate it.
5. False. The value of sin θ always lies between –1 and 1, and 4/3 > 1.

Practice Questions

Question 1
What is $1 - \sqrt {3}$?
A) Non-terminating repeating
B) Non-terminating non-repeating
C) Terminating
D) None of the above

Question 2
The volume of the largest right circular cone that can be cut out from a cube of edge 4.2 cm is?
A) 19.4 cm³
B) 12 cm³
C) 78.6 cm³
D) 58.2 cm³

Question 3
The sum of the first three terms of an AP is 33. If the product of the first and the third term exceeds the second term by 29, the AP is?
A) 2, 21, 11
B) 1, 10, 19
C) –1, 8, 17
D) 2, 11, 20
0.999993
Partition and scale in Azure Cosmos DB Azure Cosmos DB is a globally distributed, multimodel database service designed to help you achieve fast, predictable performance. It scales seamlessly along with your application as it grows. This article provides an overview of how partitioning works for all the data models in Azure Cosmos DB. It also describes how you can configure Azure Cosmos DB containers to effectively scale your applications. Partitioning and partition keys are discussed in this Azure Friday video with Scott Hanselman and Azure Cosmos DB Principal Engineering Manager, Shireesh Thota: Partitioning in Azure Cosmos DB In Azure Cosmos DB, you can store and query schema-less data with order-of-millisecond response times at any scale. Azure Cosmos DB provides containers for storing data called collections (for documents), graphs, or tables. Containers are logical resources and can span one or more physical partitions or servers. The number of partitions is determined by Azure Cosmos DB based on the storage size and the provisioned throughput of the container. Every partition in Azure Cosmos DB has a fixed amount of SSD-backed storage associated with it and is replicated for high availability. Partition management is fully managed by Azure Cosmos DB, and you don't have to write complex code or manage your partitions. Azure Cosmos DB containers are unlimited in terms of storage and throughput. Resource partitioning Partitioning is transparent to your application. Azure Cosmos DB supports fast reads and writes, queries, transactional logic, consistency levels, and fine-grained access control via methods/APIs to a single container resource. The service handles distributing data across partitions and routing query requests to the right partition. How does partitioning work? Each item must have a partition key and a row key, which uniquely identify it. 
Your partition key acts as a logical partition for your data and provides Azure Cosmos DB with a natural boundary for distributing data across partitions. In brief, here's how partitioning works in Azure Cosmos DB: • You provision a Azure Cosmos DB container with T requests/s throughput. • Behind the scenes, Azure Cosmos DB provisions partitions needed to serve T requests/s. If T is higher than the maximum throughput per partition t, then Azure Cosmos DB provisions N = T/t partitions. • Azure Cosmos DB allocates the key space of partition key hashes evenly across the N partitions. So, each partition (physical partition) hosts 1-N partition key values (logical partitions). • When a physical partition p reaches its storage limit, Azure Cosmos DB seamlessly splits p into two new partitions, p1 and p2. It distributes values corresponding to roughly half the keys to each of the partitions. This split operation is invisible to your application. • Similarly, when you provision throughput higher than t*N, Azure Cosmos DB splits one or more of your partitions to support the higher throughput. The semantics for partition keys are slightly different to match the semantics of each API, as shown in the following table: API Partition key Row key Azure Cosmos DB Custom partition key path Fixed id MongoDB Custom shared key Fixed _id Graph Custom partition key property Fixed id Table Fixed PartitionKey Fixed RowKey Azure Cosmos DB uses hash-based partitioning. When you write an item, Azure Cosmos DB hashes the partition key value and uses the hashed result to determine which partition to store the item in. Azure Cosmos DB stores all items with the same partition key in the same physical partition. The choice of the partition key is an important decision that you have to make at design time. You must pick a property name that has a wide range of values and has even access patterns. 
Note It's a best practice to have a partition key with many distinct values (hundreds to thousands at a minimum). Azure Cosmos DB containers can be created as fixed or unlimited. Fixed-size containers have a maximum limit of 10 GB and 10,000 RU/s throughput. Some APIs allow the partition key to be omitted for fixed-size containers. To create a container as unlimited, you must specify a minimum throughput of 2,500 RU/s. It is a good idea to check how your data is distributed in partitions. To check this in portal, go to your Azure Cosmos DB account and click on Metrics in Monitoring section and then on right pane click on storage tab to see how your data is partitioned in different physical partition. Resource partitioning The left image shows the result of a bad partition key and the right image shows the result of a good partition key. In left image, you can see the data is not evenly distributed among partitions. You should strive to distribute your data so your graph looks similar to right image. Partitioning and provisioned throughput Azure Cosmos DB is designed for predictable performance. When you create a container, you reserve throughput in terms of request units (RU) per second. Each request is assigned a RU charge that is proportionate to the amount of system resources like CPU, memory, and IO consumed by the operation. A read of a 1-KB document with session consistency consumes 1 RU. A read is 1 RU regardless of the number of items stored or the number of concurrent requests running at the same time. Larger items require higher RUs depending on the size. If you know the size of your entities and the number of reads you need to support for your application, you can provision the exact amount of throughput required for your application's read needs. Note To achieve the full throughput of the container, you must choose a partition key that allows you to evenly distribute requests among some distinct partition key values. 
Work with the Azure Cosmos DB APIs You can use the Azure portal or Azure CLI to create containers and scale them at any time. This section shows how to create containers and specify the throughput and partition key definition in each of the supported APIs. Azure Cosmos DB API The following sample shows how to create a container (collection) by using the Azure Cosmos DB API. DocumentClient client = new DocumentClient(new Uri(endpoint), authKey); await client.CreateDatabaseAsync(new Database { Id = "db" }); DocumentCollection myCollection = new DocumentCollection(); myCollection.Id = "coll"; myCollection.PartitionKey.Paths.Add("/deviceId"); await client.CreateDocumentCollectionAsync( UriFactory.CreateDatabaseUri("db"), myCollection, new RequestOptions { OfferThroughput = 20000 }); You can read an item (document) by using the GET method in the REST API or by using ReadDocumentAsync in one of the SDKs. // Read document. Needs the partition key and the ID to be specified DeviceReading document = await client.ReadDocumentAsync<DeviceReading>( UriFactory.CreateDocumentUri("db", "coll", "XMS-001-FE24C"), new RequestOptions { PartitionKey = new PartitionKey("XMS-0001") }); MongoDB API With the MongoDB API, you can create a sharded collection through your favorite tool, driver, or SDK. In this example, we use the Mongo Shell for the collection creation. In the Mongo Shell: db.runCommand( { shardCollection: "admin.people", key: { region: "hashed" } } ) Results: { "_t" : "ShardCollectionResponse", "ok" : 1, "collectionsharded" : "admin.people" } Table API With the Table API, you specify the throughput for tables in the appSettings configuration for your application. <configuration> <appSettings> <!--Table creation options --> <add key="TableThroughput" value="700"/> </appSettings> </configuration> Then you create a table by using the Azure Table storage SDK. The partition key is implicitly created as the PartitionKey value. 
CloudTableClient tableClient = storageAccount.CreateCloudTableClient(); CloudTable table = tableClient.GetTableReference("people"); table.CreateIfNotExists(); You can retrieve a single entity by using the following snippet: // Create a retrieve operation that takes a customer entity. TableOperation retrieveOperation = TableOperation.Retrieve<CustomerEntity>("Smith", "Ben"); // Execute the retrieve operation. TableResult retrievedResult = table.Execute(retrieveOperation); For more information, see Develop with the Table API. Graph API With the Graph API, you must use the Azure portal or Azure CLI to create containers. Alternatively, because Azure Cosmos DB is multimodel, you can use one of the other models to create and scale your graph container. You can read any vertex or edge by using the partition key and ID in Gremlin. For example, for a graph with region ("USA") as the partition key and "Seattle" as the row key, you can find a vertex by using the following syntax: g.V(['USA', 'Seattle']) You can reference an edge by using the partition key and the row key. g.E(['USA', 'I5']) For more information, see Gremlin support for Azure Cosmos DB. Design for partitioning To scale effectively with Azure Cosmos DB, you need to pick a good partition key when you create your container. There are two main considerations for choosing a partition key: • Boundary for query and transactions. Your choice of partition key should balance the need to enable the use of transactions against the requirement to distribute your entities across multiple partition keys to ensure a scalable solution. At one extreme, you can set the same partition key for all your items, but this option might limit the scalability of your solution. At the other extreme, you can assign a unique partition key for each item. This choice is highly scalable, but it prevents you from using cross-document transactions via stored procedures and triggers. 
An ideal partition key enables you to use efficient queries and has sufficient cardinality to ensure your solution is scalable. • No storage and performance bottlenecks. It's important to pick a property that allows writes to be distributed across various distinct values. Requests to the same partition key can't exceed the throughput of a single partition and are throttled. So it's important to pick a partition key that doesn't result in "hot spots" within your application. Because all the data for a single partition key must be stored within a partition, you should avoid partition keys that have high volumes of data for the same value. Let's look at a few real-world scenarios and good partition keys for each: • If you're implementing a user profile back end, the user ID is a good choice for partition key. • If you're storing IoT data, for example, device state, a device ID is a good choice for partition key. • If you're using Azure Cosmos DB for logging time-series data, the hostname or process ID is a good choice for partition key. • If you have a multitenant architecture, the tenant ID is a good choice for partition key. In some use cases, like IoT and user profiles, the partition key might be the same as your ID (document key). In others, like the time-series data, you might have a partition key that's different from the ID. Partitioning and logging/time-series data One of the common use cases of Azure Cosmos DB is for logging and telemetry. It's important to pick a good partition key, because you might need to read/write vast volumes of data. The choice depends on your read-and-write rates and the kinds of queries you expect to run. Here are some tips on how to choose a good partition key: • If your use case involves a small rate of writes that accumulate over a long time and you need to query by ranges of time stamps and other filters, use a rollup of the time stamp. For example, a good approach is to use date as a partition key. 
With this approach, you can query over all the data for a date from a single partition. • If your workload is write-heavy, which is more common, use a partition key that's not based on time stamp. With this approach, Azure Cosmos DB can distribute writes evenly across various partitions. Here, a hostname, process ID, activity ID, or another property with high cardinality is a good choice. • Another approach is a hybrid one where you have multiple containers, one for each day/month, and the partition key is a granular property like hostname. This approach has the benefit that you can set different throughput based on the time window. For example, the container for the current month is provisioned with higher throughput because it serves reads and writes. Previous months are provisioned with lower throughput because they only serve reads. Partitioning and multitenancy If you're implementing a multitenant application by using Azure Cosmos DB, there are two popular patterns: one partition key per tenant and one container per tenant. Here are the pros and cons for each: • One partition key per tenant. In this model, tenants are collocated within a single container. But queries and inserts for items within a single tenant can be performed against a single partition. You can also implement transactional logic across all items within a tenant. Because multiple tenants share a container, you can save storage and throughput costs by pooling resources for tenants within a single container rather than provisioning extra headroom for each tenant. The drawback is that you don't have performance isolation per tenant. Performance/throughput increases apply to the entire container versus targeted increases for tenants. • One container per tenant. In this model, each tenant has its own container, and you can reserve performance per tenant. With the Azure Cosmos DB new provisioning pricing, this model is more cost-effective for multitenant applications with a few tenants.
You can also use a combination/tiered approach that collocates small tenants and migrates larger tenants to their own container. Next steps In this article, we provided an overview of concepts and best practices for partitioning with any Azure Cosmos DB API.
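To make the date-rollup versus high-cardinality trade-off above concrete, here is a small hypothetical Python sketch of two synthetic partition-key strategies for telemetry items. The field names and values are illustrative only; this is not an Azure Cosmos DB API:

```python
from datetime import datetime, timezone

def date_rollup_key(event_time):
    """Partition key for low write rates with time-range queries:
    every event in the same UTC day lands in one logical partition."""
    return event_time.strftime("%Y-%m-%d")

def high_cardinality_key(item):
    """Partition key for write-heavy workloads: spread writes across
    many distinct values, here the emitting hostname."""
    return item["hostname"]

item = {"hostname": "web-07",
        "time": datetime(2024, 5, 1, 12, 30, tzinfo=timezone.utc)}
print(date_rollup_key(item["time"]))   # 2024-05-01 (can run hot on busy days)
print(high_cardinality_key(item))      # web-07 (distributes writes evenly)
```

The first key makes range queries by day cheap but concentrates all of a day's writes in one partition; the second spreads writes but forces cross-partition queries for time ranges, which is exactly the trade-off described above.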
How to make replies from SDP portal send as, or show who is sending the reply? When a technician replies to a ticket in SDP, we use the default template and most of the time we forget to sign our names at the bottom. I recently realized that when the requester receives it, they don't actually know who they are talking to. The email comes from Qosina Help Desk <HelpDesk Email Address> so they can't go off the sender. If we add the $Technician placeholder, it adds the assigned technician's name to the email instead of the technician actually responding. Is there a placeholder I am missing? Or is there a way to have the SDP email server spoof the technician replying? Like it just changes the display name so it will appear as Technician <HelpDesk Email Address> or something to that effect?
http://www.sqlservercentral.com/blogs/chadmiller/2012/11/06/scripting-ssis-package-deployments/ Scripting SSIS Package Deployments By Chad Miller, 2012/11/06 Before I delve into the subject of scripting SSIS package deployments, I'm going to take a slight detour and explain why the general subject of automating deployments is important. Automating Deployments One of the key areas where you should be looking at automation, possibly through scripting, is deployments. If you're doing deployments and you're not using a scripted repeatable process, free of GUI and typing, then you're doing it wrong. I've seen many times in my career that deployments too often rely on complex instructions rather than a tested script-based approach. It is inevitable; relying on even the most detailed step-by-step manual instructions will lead to deployment errors because of the human operator factor. When it actually comes time to deploy changes there should be zero typing or clicking. And if it's not a fully automated deployment, then any manual steps should be made as simple as possible, such as "run this script with these parameters through copy and paste and report back results." End Detour. SSIS Package Deployments My SSIS package deployment requirements: 1. The solution must support 2005, 2008, 2008 R2 and 2012 because I have a mixed environment 2. The solution must support deploying to a SQL Server data storage in msdb from a dtsx file 3. The solution must include verification of package installation 4. The solution must be able to create any needed folder structures automatically 5. The solution must include error handling and detailed output on operations performed 6. The solution must support constrained parameters based on using SQL Server data store of a ServerInstance, the dtsx file and the full destination path on the SSIS server When automating any task I'll see if there's already a solution either from Microsoft or third parties.
I couldn't find anything that out-of-the-box meets all my requirements, but I did find two ways which provide partial solutions. The first is writing Powershell code directly against Microsoft.SqlServer.ManagedDTS, like I've done in the SSIS Powershell module I created for SQL Server Powershell Extensions. There is a function in the SSIS module called Copy-ISItemFileToSQL; however, it provides only part of the solution, and there's a bigger problem of incompatibilities between versions to handle. The assembly for SSIS changes between 2005 and 2008/2008 R2 and 2012, which makes crafting a complete solution difficult. I've given up on going down this path because it quickly becomes complex. The second option, and the one I went with, is to use the command-line utility dtutil.exe. The nice thing about dtutil: it's included with SQL Server 2005 and higher, well-documented, and it removes some of the complexity of coding against the SSIS classes directly. Although dtutil.exe only meets requirements 1 through 3 above, I can fill in the rest with a bit of Powershell code. I present my Powershell script solution install-ispackage.ps1. Using Install-ISpackage To use install-ispackage, simply download the script from PoshCode and run it by providing three parameters. Here's an example of installing a dtsx file to my SSIS server: ./install-ispackage.ps1 -DtsxFullName "C:\Users\Public\bin\SSIS\sqlpsx1.dtsx" -ServerInstance "Z001\SQL1" -PackageFullName "SQLPSX\sqlpsx1" Install-ISPackage Explained The install-ISPackage script provides an example of how you can approach calling native console applications (exe's) from Powershell. You see, error handling and output handling differ greatly when calling an exe vs. using cmdlets or .NET code. The former does not trigger errors and instead relies on exit codes defined by the console application developer. You have to check lastexitcode and read whatever documentation is provided with the console application to determine what the exit codes mean.
I'll step through a few things to explain: When I'm dealing with scripts that make changes I like to set $ErrorActionPreference to Stop instead of the default of Continue. This way I can wrap some error handling and logging around any errors and be assured the script won't proceed to the next step should an error occur. I also like to make the exit code more user friendly. I'll do this by reading the documentation for the command-line utility. On the msdn page for dtutil there is a nice table under dtutil Exit Codes, which I then create as a hashtable at the top of the script: $exitCode = @{ 0="The utility executed successfully." 1="The utility failed." 4="The utility cannot locate the requested package." 5="The utility cannot load the requested package." 6="The utility cannot resolve the command line because it contains either syntactic or semantic errors"} I can then return a more useful error message by using the hashtable with the built-in variable $lastexitcode: throw $exitcode[$lastexitcode] You'll notice in the Get-SqlVersion function I'm just using the classic sqlcmd.exe console application to run a query to get the SQL Server version number: $SqlVersion = sqlcmd -S "$ServerInstance" -d "master" -Q "SET NOCOUNT ON; SELECT SERVERPROPERTY('ProductVersion')" -h -1 -W I choose to use sqlcmd.exe instead of the invoke-sqlcmd Powershell cmdlet because it's installed on every SQL 2005 machine and it's easier to use when I just want to return a single string: C:\Users\Public\bin>Get-SqlVersion -ServerInstance Z001\sql1 10.50.2550.0 The Set-DtutilPath function tries to find the "right" dtutil.exe based on the SQL version being deployed to. You see, although parameters for dtutil.exe are identical between versions, the utility isn't backwards or forward compatible. You have to use the 9.0 version for 2005, the 10.0 version for both 2008 and 2008 R2 and the 11.0 version for 2012.
The rest of the functions follow a basic pattern: Run dtutil.exe and save the output to the $result variable. $result will be an array of strings, so create a single string separated by newlines: $result = $result -join "`n" Rather than returning an error on failure or nothing on success, instead return an object with details of what was run: new-object psobject -property @{ ExitCode = $lastexitcode ExitDescription = "$($exitcode[$lastexitcode])" Command = "$Script:dtutil /File `"$DtsxFullName`" /DestServer `"$ServerInstance`" /Copy SQL;`"$PackageFullName`" /Quiet" Result = $result Success = ($lastexitcode -eq 0)} I really like using this technique so that if there are failures, as part of troubleshooting you can just run the Command property and you get other relevant details. The key here is you can always get back to the base utility, so if something doesn't work in the script you can prove it's not the script when you get the same error in the utility alone. Note: I have seen errors a few times, usually because a developer will create an SSIS package in a later version than the server being deployed to. Check the $lastexitcode after calling the utility and return an object with details: if ($lastexitcode -ne 0) { throw $exitcode[$lastexitcode] } Here I'll use the hashtable defined at the top of the script to return a more friendly error message. Between the error returned and the detailed object, I can troubleshoot any issues. The rest of the functions follow a similar pattern. I will point out that a non-zero exit code doesn't necessarily mean an error. Some console application developers will use an exit code of 1 or other numbers to mean something other than error, as is the case when testing whether a folder path exists in dtutil. If it doesn't exist, an exit code of 1 is returned. Sometimes it's hard to determine when a non-zero exit code means something other than error except through using the utility.
Fortunately Powershell cmdlets don't use weird exit codes to return status; they generally return an object or error object. But if you're going to write Powershell scripts against command-line utilities, you'll need to be aware of exit codes and the specific exit code meanings for the utility you're using. The other thing I'll point out is the logic to create nested folder paths in the Get-FolderList and New-Folder functions. The functions are in place to satisfy my fourth requirement to automatically create folders if they don't exist. The main section executes the series of functions in order, wrapped in a try/catch block, and since I set my $ErrorAction and check the $lastexitcode, throwing an error in each function, the script will stop should an error occur. Copyright © 2002-2014 Simple Talk Publishing. All Rights Reserved.
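The pattern the article builds (run a console utility, translate its numeric exit code through a lookup table, and return a structured result object instead of raw text) is portable beyond PowerShell. A minimal Python sketch of the same idea, with a placeholder exit-code table rather than dtutil's real interface:

```python
import subprocess
import sys

# Placeholder exit-code table in the spirit of the article's $exitCode hashtable.
EXIT_CODES = {
    0: "The utility executed successfully.",
    1: "The utility failed.",
    4: "The utility cannot locate the requested package.",
}

def run_utility(argv):
    """Run a console utility and return a structured result.

    Mirrors the article's pattern: capture the output, keep the exact
    command for later reproduction, and translate the exit code into a
    human-readable description.
    """
    proc = subprocess.run(argv, capture_output=True, text=True)
    return {
        "ExitCode": proc.returncode,
        "ExitDescription": EXIT_CODES.get(proc.returncode, "Unknown exit code"),
        "Command": " ".join(argv),
        "Result": proc.stdout + proc.stderr,
        "Success": proc.returncode == 0,
    }

# Simulate a utility that exits with code 4.
result = run_utility([sys.executable, "-c", "raise SystemExit(4)"])
if not result["Success"]:
    print(result["ExitDescription"])  # a friendly message, not just a number
```

As in the article, keeping the exact command and the exit-code description in the result makes failures reproducible outside the script.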
There are these finite fields of characteristic $p$, namely $\mathbb{F}_{p^n}$ for any $n>1$ and there is the algebraic closure $\bar{\mathbb{F}_p}$. The only other fields of non-zero characteristic I can think of are transcendental extensions, namely $\mathbb{F}_{q}(x_1,x_2,..x_k)$ where $q=p^{n}$. That's all! I cannot think of any other fields of non-zero characteristic. I may be asking too much if I ask for a characterization of all non-zero characteristic fields. But I would like to know what other kinds of such fields are possible. Thanks. share|improve this question      Every field $F$ of characteristic $p$ is the image of some polynomial ring $\mathbb Z_p[X]$ where $X$ is some set of variables. That does not determine the field uniquely, of course. :) –  Thomas Andrews Nov 2 '11 at 18:43      @Thomas Andrews You mean ring-homomorphic image? –  Dinesh Nov 2 '11 at 18:46      Yes. So, for every $F$, there is a set $X$ (you can always find $|X|\leq |F|$,) and a maximal ideal $I\subset\mathbb{Z}_p[X]$ so that $F\cong \mathbb{Z}_p[X]/I$ –  Thomas Andrews Nov 2 '11 at 18:48 1   No, two such ideals can give the same $F$. You can see that with $F_{p^k}=\mathbb{Z}_p[x]/<\pi(x)>$ where $\pi(x)$ can be any prime polynomial of degree $k$. That's why it is trickery - different ideals can give the same field. –  Thomas Andrews Nov 2 '11 at 19:09 2   @Dinesh: You have also fields of power series $\mathbb F_p((x_1,...,x_n))$. –  user18119 Nov 2 '11 at 19:51 3 Answers up vote 9 down vote accepted There are finite extensions of the transcendental fields you've written down. Indeed, since $k(x_1,\ldots,x_n)$ is not algebraically closed when $n \geq 1$, no matter what field $k$ of coefficients you choose, it has non-trivial finite extensions.
The classification of these fields is not a simple matter; in fact, it is one of the main topics of algebraic geometry. (One can think of it as being the problem of classifying $n$-dimensional varieties up to birational equivalence.) In any case, I would say that these fields, for some choice of $n$ (possibly $0$), and with $k$ equal to $\mathbb F_q$ or $\overline{\mathbb F}_p$, are the characteristic $p$ fields that arise the most often in practice. [Also: one reason that you can't think of other examples is that any field of char. $p$ which is finitely generated over its prime subfield $\mathbb F_p$ is a finite extension of $\mathbb F_p(x_1,\ldots,x_n)$ for some $n$; that is also why these tend to be the examples that arise most often.] share|improve this answer      I believe the only obvious class of fields I missed in the question is the finite extensions of transcendental extensions of finite fields(and its algebraic closure). My question is these are the only fields I can think of and I want some not so obvious fields other than them if they exist. Thanks. –  Dinesh Nov 2 '11 at 18:18 1   @Dinesh: Dear Dinesh, You can always take the algebraic closure of, e.g. $\mathbb F_p(x)$, then form transcendental extensions of these, then take finite extensions of those, and then perhaps the algebraic closure of those, then take transcendental extensions of those, and continue to iterate, even transfinitely if you like. Nevertheless, the "obvious" examples are the ones that tend to come up in practice, at least in my experience. Regards, –  Matt E Nov 2 '11 at 18:21 1   Also, it ought to be possible to show using Zorn's lemma that every field of characteristic $p$ is produced by some (possibly transfinite) chain of finite and/or transcendental extensions starting with $\mathbb F_p$. –  Henning Makholm Nov 2 '11 at 18:31 1   No -- the "ought" indicates that I made it up on the spot, but it sounded plausible. 
I imagine using Zorn's Lemma on the set of all extension chains of subfields, ordered by sequence prefixes, and get a maximal extension chain. It should be easy to see that a maximal extension chain must end with the entire field. –  Henning Makholm Nov 2 '11 at 18:40 2   A simpler statement is true: every field is an algebraic extension of a purely transcendental extension of its prime subfield. This can be proven using Zorn to get a maximal algebraically independent set. Every algebraic extension can be expressed as a (transfinite) chain of finite extensions. –  Chris Eagle Nov 2 '11 at 18:50 The basic structure theory of fields tells us that a field extension $L/K$ can be split into the following steps: 1. an algebraic extension $K^\prime /K$, 2. a purely transcendental extension $K^\prime (T)/K^\prime$, 3. an algebraic extension $L/K^\prime (T)$. The field $K^\prime$ is the algebraic closure of $K$ in $L$ and thus uniquely determined by $L/K$. The set $T$ is a transcendence basis of $L/K$; its cardinality is uniquely determined by $L/K$. A field $L$ has characteristic $p\neq 0$ iff it contains the finite field $\mathbb{F}_p$. Hence you get all fields of characteristic $p$ by letting $K=\mathbb{F}_p$ in the description of field extensions, and by chosing $T$ and $K^\prime$ and $L/K^\prime (T)$ as you like. Of course in general it is then hard to judge whether two such fields are isomorphic - essentially because of step 3.
–  Hagen Nov 2 '11 at 22:14 No need to limit yourself to a finite number of transcendentals... So $\mathbb F_q(x_1,x_2,\dots,x_n,\dots)$ is another example. You can also use $\bar{\mathbb{F}_p}$ as the coefficient field. Many combinations are possible. What characterization are you after? share|improve this answer 2   Yeah of course, I was asking whether there are any other fields other than these (which I mentioned in the question) fields of non-zero characteristic. If yes, I would like to see them. –  Dinesh Nov 2 '11 at 18:14
Cloud Mail 2020 Python Edition fetch_message_text Method Retrieves the message text of messages specified by the MessageSet property. Syntax def fetch_message_text() -> None: ... Remarks This method retrieves the RFC822-encoded text of the message specified by message_set. If the local_file property contains a file name, the text is stored in local_file, and the on_transfer events denote the progress. If the local_file property contains an empty string, the text is stored in the message_text property, as well as provided through the on_transfer event. Copyright (c) 2021 /n software inc. - All rights reserved. Cloud Mail 2020 Python Edition - Version 20.0 [Build 7718]
5 I'm hosting an image sharing site and I seem to be running into an unusual problem. I use the Apache module mod_rewrite to make all the urls to each image much shorter than they would be otherwise, but this seems to be preventing other modules such as mod_bw or mod_bandwidth from working, since the user isn't technically requesting to download a file. My problem occurs when someone uploads a 2 MB animated gif. Sometimes the gif will steal all the bandwidth to the server and render my site useless. I need a way to detect when users want to view gifs and then limit their speed to something more reasonable. The only way I can think of doing this is if there were some Apache module that detected the .gif at the end of the url and then kicked on the bandwidth limiting. Is this even possible? Or is there something else I can do? 2 You're looking for mod_cband to do what you need. You wrap its directives inside a LocationMatch container for .gif files, for example. If for some reason you're running an old Apache 1.3, look up mod_bandwidth or mod_throttle instead. http://codee.pl/cband.html 0 nginx has this function. You can make nginx a reverse proxy for Apache. Sample code: location /download/ { limit_rate 10k; }
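As a sketch of the first answer's LocationMatch suggestion: on Apache 2.4 and later, the stock mod_ratelimit module (used here instead of the third-party mod_cband) can throttle just GIF responses. The 100 KiB/s figure is an arbitrary example, and this assumes the URLs clients request still end in .gif after your mod_rewrite rules:

```apache
<IfModule mod_ratelimit.c>
    <LocationMatch "\.gif$">
        # Throttle matching responses to roughly 100 KiB/s per connection.
        SetOutputFilter RATE_LIMIT
        SetEnv rate-limit 100
    </LocationMatch>
</IfModule>
```

Because this matches on the requested URL rather than on the file being served, verify against your rewrite rules that GIF requests really do carry a .gif suffix at match time.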
I am trying to plot a piecewise function in Maple and have the endpoints of each piece be an open circle or closed circle accordingly. To define the function, I did: f := proc (x) options operator, arrow; piecewise(x <= -1, -2-x, -1 < x and x <= 1, 4-4*x^2, 1 < x, x^2-4*x+3); Then to get a picture I did: plot(f(x), x = -4 .. 4, discont = true) I'd like to have the point $(-1,0)$, coming from the parabola, to have an open dot. I can't figure out how to make that work. I am brand new to Maple. share|cite|improve this question up vote 1 down vote accepted For this particular case use: with(plottools):with(plots): display(circle([-1, 0], 0.05, color = red), plot(f(x), x = -4 .. 4, discont = true)); share|cite|improve this answer One potential issue with using plottools:-circle is that the shape can appear as an ellipse if, say, the displayed x- and y-views are not equal and the scaling is not constrained. (Eg. change the x-range to x=-1.4 .. -0.9 to see this effect.) Also, such a circle can get rendered a little roughly. An alternative is to use plottools:-point and set the symbol to circle or solidcircle, which should render as circles regardless of the view or scaling. You can even program it to compute the limits and value, etc. restart: with(plottools):with(plots): f := x -> piecewise(x <= -1, -2-x, -1 < x and x <= 1, 4-4*x^2, 1 < x, x^2-4*x+3): setoptions(color="DarkRed", symbolsize=16, symbol=circle); display(point([-1, limit(f(x),x=-1,right)]), point([-1, limit(f(x),x=-1,left)]), point([-1, f(-1)], symbol=solidcircle), plot(f(x), x = -4 ..
4, discont = true)); In the above code a few plot options are set as new defaults, which can get overridden with optional arguments to the plotting commands. (Eg. symbol=solidcircle, when passed as an option, overrides the new default of symbol=circle.)
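The second answer's idea (compare each one-sided limit with the function's value at a breakpoint, and draw a closed dot on a match and an open dot otherwise) is independent of Maple. A hypothetical Python sketch of the same classification, approximating the limits numerically with arbitrary tolerances:

```python
def f(x):
    """The question's piecewise function."""
    if x <= -1:
        return -2 - x
    elif x <= 1:
        return 4 - 4 * x**2
    else:
        return x**2 - 4 * x + 3

def endpoint_markers(f, x0, eps=1e-9, tol=1e-6):
    """Classify the dots to draw at breakpoint x0: for each one-sided
    limit (approximated at x0 -/+ eps), a closed dot if it agrees with
    f(x0) and an open dot otherwise."""
    value = f(x0)
    markers = []
    for side in (f(x0 - eps), f(x0 + eps)):
        kind = "closed" if abs(side - value) < tol else "open"
        markers.append((round(side, 6), kind))
    return markers
```

For the question's f, this reports a closed dot at (-1, -1) and an open dot at (-1, 0), and finds no jump at x = 1 at all, since both one-sided limits there equal f(1) = 0.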
How To Code in Python How to Code in Python: A Beginner’s Guide Introduction Python is a high-level, interpreted programming language that is widely used in various fields, including web development, data science, artificial intelligence, and machine learning. It is known for its simple and easy-to-read syntax, making it an ideal language for beginners to start coding. If you’re new to programming or have some experience with other languages, this guide will provide you with the foundational knowledge and skills to start coding in Python. Setting Up Your Environment Before you start coding in Python, you need to set up your environment. This involves installing Python and a code editor or integrated development environment (IDE). Installing Python Python comes in two main versions: Python 2 and Python 3. While Python 2 is still used in some legacy systems, it is recommended to use Python 3 as it has better support and features. To install Python 3, follow these steps: 1. Go to the official Python website (https://www.python.org/downloads/). 2. Download the latest version of Python 3 for your operating system. 3. Run the installer and follow the prompts to complete the installation. Choosing a Code Editor or IDE A code editor or IDE is a program that allows you to write and edit code. There are many code editors and IDEs available for Python, both free and paid. Some popular code editors and IDEs for Python include: • Visual Studio Code • PyCharm • Spyder • Sublime Text • Atom Choose an editor or IDE that suits your needs and preferences. Basic Syntax and Variables Basic Syntax Python has a simple and easy-to-read syntax. Unlike other languages that use curly braces to indicate blocks of code, Python uses indentation. Here’s an example of a simple Python program that prints "Hello, World!": print("Hello, World!") Variables Variables are used to store and manipulate data in a program. In Python, you can assign a value to a variable using the equal sign (=). 
Here's an example of how to assign a value to a variable: x = 5 In this example, the variable x is assigned the value 5. Data Types and Operators Data Types Python has several built-in data types, including: • Integers (int) • Floating-point numbers (float) • Strings (str) • Booleans (bool) • Lists • Tuples • Dictionaries Here's an example of how to create a list in Python: my_list = [1, 2, 3, 4, 5] Operators Operators are used to perform operations on variables and values. Python has several types of operators, including: • Arithmetic operators (+, -, *, /, %, **) • Comparison operators (==, !=, >, <, >=, <=) • Logical operators (and, or, not) Control Flow and Loops Conditional Statements Conditional statements are used to execute code based on whether a condition is true or false. In Python, conditional statements use the keywords if, elif, and else. Here's an example of how to use a conditional statement in Python: x = 15 if x > 10: print("x is greater than 10") elif x < 10: print("x is less than 10") else: print("x is equal to 10") In this example, the program checks whether x is greater than, less than, or equal to 10, and prints the appropriate message. Loops Loops are used to execute a block of code multiple times. Python has two main types of loops: for loops and while loops. Here's an example of how to use a for loop in Python: my_list = [1, 2, 3, 4, 5] for x in my_list: print(x) In this example, the program loops through each item in the list and prints it. Functions and Modules Functions Functions are used to group related code and perform a specific task. In Python, you can define a function using the keyword def. Here's an example of how to define a function in Python: def add_numbers(x, y): return x + y In this example, the function add_numbers takes two arguments (x and y) and returns their sum. Modules Modules are used to organize code into reusable units. In Python, a module is a file containing Python code that can be imported into other files. Here's an example of how to import a module in Python: import math Conclusion Python is a powerful and versatile programming language that can be used for a variety of tasks.
In this guide, we covered the basics of Python syntax, data types and operators, control flow and loops, functions and modules. By following this guide and practicing your coding skills, you'll be well on your way to becoming a proficient Python programmer. Good luck!
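The loops section of the guide mentions that Python has while loops but only demonstrates a for loop; for completeness, here is a short while-loop sketch in the same spirit (the countdown values are arbitrary):

```python
# Count down from 5 to 1 using a while loop.
count = 5
while count > 0:
    print(count)
    count = count - 1  # without this update, the loop would never end
```

Each pass prints the current value and then decreases it, so the condition count > 0 eventually becomes false and the loop stops.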
What is an unknown data source? The Intel manual mentions that some of the performance events exclude "unknown data source." These include the following events on Broadwell: MEM_LOAD_UOPS_RETIRED.L1_MISS, MEM_LOAD_UOPS_RETIRED.L2_MISS, and MEM_LOAD_UOPS_RETIRED.L3_MISS; the following events on Haswell: MEM_LOAD_UOPS_RETIRED.L2_MISS and MEM_LOAD_UOPS_RETIRED.L3_MISS; and the MEM_LOAD_UOPS_RETIRED.L2_MISS event on Ivy Bridge. There are apparently no "unknown data sources" on Skylake. (1) What is an unknown data source? (2) When and why do unknown data sources occur? (3) Does MEM_UOPS_RETIRED.ALL_LOADS count unknown data sources? (4) Are there really no unknown data sources on Skylake, or are they counted by the events? (5) Why does MEM_LOAD_UOPS_RETIRED.L1_MISS include unknown data sources on Ivy Bridge and Haswell but not on Broadwell? Why does MEM_LOAD_UOPS_RETIRED.LLC_MISS include unknown data sources on Ivy Bridge but not on Haswell and Broadwell? (6) The online documentation (https://download.01.org/perfmon/index/) also says that some of the cache hit events exclude unknown data source on some of these microarchitectures. What does that mean? How can a cache hit have an unknown data source? I find all of this very confusing.
How can I boot a snapshot with the same network interfaces as the original virtual machine? Is there a way where I can nova boot a snapshot and assign an IP? For instance, I have a virtual machine running and I have taken a snapshot of it. If the OpenStack infrastructure fails, then when the infrastructure is up again, I want to relaunch the snapshot with the same internet protocol address as the original virtual machine. I can see a way for fixed addresses, but for a floating IP how could I do it?
Hell: Shell scripting Haskell dialect Hell is a shell scripting language that is a tiny dialect of Haskell that I wrote for my own shell scripting purposes. As of February, I'm using Hell to generate this blog, instead of Hakyll.1 My 2024 New Year's Resolution is to write more shell scripts in the name of automation. I've always avoided this because of the downsides of bash. And other problems. Bash, zsh, fish, etc. have problems: They're incomprehensible gobbledegook. They use quotation (x=$(ls -1) ..) which makes it easy to make mistakes. They lean far too heavily on sub processes to do basic things. Therefore things like equality, arithmetic, ordering, etc. are completely unprincipled. Absolutely full of pitfalls.2 But, bash does have some upsides: It's stable, it's simple, and it works the same on every machine. You can write a bash script and keep it running for years while never having to change any code. The code you wrote last year will be the same next year. So in the interest of defining a language that I would like to use, let's discuss the anatomy of a shell scripting language: It should be very basic. It should run immediately (no visible compilation steps). It should have no module system. It should have no package system. It should have no abstraction capabilities (classes, data types, polymorphic functions, etc.). And it does not change in backwards-incompatible ways.3 Why no module or package system? They make it harder for a system to be "done." There's always some other integration that you can do; some other feature. I'd prefer Hell to be cold-blooded software; there's beauty in finished software. Based on the above I can define a "Scripting Threshold": when you reach for a module system or a package system, or abstraction capabilities, or when you want more than what's in the standard library, then you probably want a general purpose programming language instead.
Taking this into consideration, I opted for making a Haskell dialect4 because: I know Haskell. It's my go-to. It has a good story about equality, ordering, etc., it has a good runtime capable of trivially doing concurrency, it's garbage collected, no funny business, it distinguishes bytes and text properly, it can be compiled to a static Linux x86 binary, it performs well, and it has static types! I made the following decisions when designing the language: Use a faithful Haskell syntax parser. It's better that way; you get re-use. It has no imports/modules/packages. It doesn't support recursive definitions, but can use fix to do so. It supports basic type-classes (Eq, Ord, Show, Monad), which are needed for e.g. List.lookup and familiar equality things. It does not support polytypes. That's a kind of abstraction and not needed. It uses all the same names for things (List.lookup, Monad.forM, Async.race, etc.) that are already used in Haskell, which lets me re-use intuitions. You can download statically-linked Linux binaries from the releases page. To read about the implementation internals, see Tour of Hell which is a set of slides I made for presenting Hell at work. 1. Tired of issues like this.↩︎ 2. Just check out the huge list of linting issues in ShellCheck.↩︎ 3. See also: Escaping the Hamster Wheel of Backwards Incompatibility↩︎ 4. And not using some other alt. shell scripting language or using Elixir, or Oil.↩︎
5 $\begingroup$ I need a convincing math problem or real-world example to show that simplifying algebraic expressions such as $2x+7y-3+6y-9x-2$ is really important and profitable, and that it's reasonable to learn how to do it. Do you know any such problem or example? P.S. The problem or example should be comprehensible to a student who has only just been introduced to algebraic expressions of degree one; no familiarity with graphing equations or using algebra for solving challenging problems. $\endgroup$ • 2 $\begingroup$ As Euclid said many years ago, when asked by a student what he would gain by studying geometry, "Give him threepence, since he must make gain out of what he learns." $\endgroup$ – user52817 Oct 20 '16 at 11:43 • 4 $\begingroup$ The question seems a bit odd; algebraic simplification and combining like terms show up as a step in almost every problem. It is, however, hardly ever the end goal. What is actually desired here? $\endgroup$ – Adam Oct 20 '16 at 13:05 • 1 $\begingroup$ @Adam I added an explanation to the post. Would you give an ELEMENTARY example of what you mean? $\endgroup$ – Behzad Oct 20 '16 at 13:27 • 2 $\begingroup$ This is like asking why the letter "M" is useful in English. It is one of the basic building blocks. The letter "M" is not so useful on its own (unless you are expressing that your meal is particularly scrumptious), but it is useful together with all of the other letters. Even the letters themselves are not useful: only when combined into words, phrases, sentences, novels. However, you cannot write a novel someday if you refuse to learn the letter M. This is similar. Hopefully you can convey that math literacy is as useful as English, and you need to establish the basics first. $\endgroup$ – Steven Gubkin Oct 20 '16 at 14:37 • 2 $\begingroup$ The question is reasonable: An elementary example that can be understood immediately can be good for motivation. "It is used all the time in more advanced contexts."
is not quite as satisfactory. $\endgroup$ – Tommi Brander Oct 20 '16 at 16:26 5 $\begingroup$ Is there a number $x$ such that $2x-7-3x+6-9x-2+10x=0$? Is there a number $x$ such that $2x-7-3x+6-9x-2+11x=0$? You can encapsulate these questions in word problems easily. For example: "Alicia has received and made gifts in dollars and bitcoins: she got two bitcoins, then gave 7 dollars, then offered 3 bitcoins, then received 6 dollars, gave 9 bitcoins, gave 2 dollars, and last received 10 bitcoins. Brahimi says the bitcoin rate is insufficient for her to break even. Is he right? At what bitcoin rate would she break even?" The point of these two particular examples is that although they look very similar, their answers are very different. Once simplified, one sees immediately why: the first reduces to $0\cdot x-3=0$, which no $x$ satisfies, while the second reduces to $x-3=0$, so $x=3$. $\endgroup$ 4 $\begingroup$ The question seems to overlook the value of brevity in our writing, mathematical or otherwise. Shorter is better -- it makes more efficient use of our writing and time, makes things easier to read and understand, and also reduces possible errors (due to fewer parts to possibly mis-read or transcribe). That might bear saying in a class once and should be a fairly obvious point. If one were in an English class and had overly verbose, rambling prose that needed editing down (while keeping the meaning the same), hopefully one would likewise see the value in that. I suspect that finding a "real-world example" is unlikely, because this is something done in the pure algebraic expression alone. It does not reflect a change in any outside model or application. $\endgroup$ 2 $\begingroup$ Your restrictions are quite limiting. But nevertheless, for a linear expression, you can definitely do something. Approach A I have had my students write stories for an expression. So, for something like $$ 3x + 4y - 5 $$ You could say that $x$ represents the number of oranges and $y$ the number of apples.
The students could write something like: Joanne sold $x$ oranges for \$3.00 each, $y$ apples for \$4.00 each, and \$5.00 were stolen from the till and for a different expression $$ 6x + 7y -3y +18 -2x -23 $$ the students could come up with a story like Jim sold $x$ apples for \$6.00 each, $y$ oranges for \$7.00 each, bought another $y$ oranges for \$3.00 each, found \$18.00 lying on the ground, bought $x$ apples for \$2.00 each, and spent \$23.00 for a new kitchen table So maybe they're hokey. But now we can ask, "Who has more money, Jim or Joanne? Explain your answer." There are all sorts of variations on this. What I'd stress here isn't an application of simplifying algebraic expressions; I'd stress that expressions can be tied to stories. The more students play with and create these sorts of stories, the less intimidating word problems are for them. But it does take a lot of work. And it takes a lot of convincing to get them to realize what is and isn't a good story. At first, many of their stories will be nonsensical (I'm assuming you're teaching high school or junior high school students). OK. So there's an application---contrived as it is---that you could use in your class. But if you allow graphing and the students have spent some time working with quadratic equations, then you have a lot more room to show the importance of simplifying. Approach B Here the context is scientific exploration/inquiry. If you develop a theory, you might be inclined to express the relationship between two quantities in one form, but after collecting data, it might be more natural to present the relation between the two quantities differently.
You can present a situation where two different people have come up with two different expressions: Kate came up with the following equation for one set of data $$ y = (x+2)(x-4) $$ Ken had a different set of data and wrote out the following equation $$ y = (x-1)^2 - 9 $$ Now, provided you've talked about graphing, zeroes of polynomials, and the vertex of a parabola, you can talk about a context where simplifying can answer questions. You can ask, "Could the data Ken and Kate collected come from similar experiments?" Just looking at the data wouldn't suffice since the data sets are different. But if the students know how to simplify the expressions, then they can answer the question: both expand to $y = x^2 - 2x - 8$. Summary Granted, you might feel you want a clean example when first presenting the notion of gathering like terms and simplifying expressions. But sometimes you just don't have enough to work with yet to make the point. It's important to teach your students that there isn't always an immediate answer to why we do something. But then promise them that you'll revisit the idea in a few weeks when they've developed some more skills and a few more ideas. If you make good on your promises, in my experience, the students will often play along. In all of this, long-range planning is important. You need to know what skills and knowledge base you're starting with in your students; how you plan to build on that; and where you intend to get. Sometimes, to create interesting and convincing "real world" applications, you have to be creative in how you present the material in the first place. $\endgroup$ 0 $\begingroup$ How about counting proteins, fats, and carbs? For example, 100 g of bread contain • total fat - 3 g, including saturated fat - 1 g • total carbs - 51 g, including fiber - 2 g, sugars - 4 g • proteins - 8 g The nutritional value of 1 g of fat is 9 kcal; of 1 g of protein, 4 kcal; of 1 g of carbs other than dietary fiber, 4 kcal; of dietary fiber, 0.
Suppose that $x$ is the weight of one piece of bread in grams. You eat two pieces of bread for breakfast, three pieces for lunch, and one as an afternoon snack. What are the amounts of fat, sugars, and non-fiber carbs, and the total nutritional value, of all the bread you eat in the whole day? Later, when you introduce systems of linear equations, you can pose questions like planning a daily menu out of, say, bread, eggs and salmon so that the total calorie intake is 2000 kcal, with 50% coming from carbs, 30% from protein and 20% from fats. $\endgroup$
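As a worked sketch of the bread arithmetic above (using the per-100 g figures from that answer and writing $x$ for the weight of one piece in grams, with $2+3+1=6$ pieces eaten in total):

```latex
% 2 + 3 + 1 = 6 pieces per day, total weight 6x grams.
% Per 100 g of bread: fat 3 g, carbs 51 g (fiber 2 g, sugars 4 g), protein 8 g.
\[
\text{fat} = \tfrac{3}{100}\cdot 6x = 0.18x\ \text{g},\qquad
\text{sugars} = \tfrac{4}{100}\cdot 6x = 0.24x\ \text{g},\qquad
\text{non-fiber carbs} = \tfrac{51-2}{100}\cdot 6x = 2.94x\ \text{g}
\]
\[
\text{kcal per 100 g} = 9\cdot 3 + 4\cdot 8 + 4\cdot(51-2) = 255,
\qquad
\text{total} = \tfrac{255}{100}\cdot 6x = 15.3x\ \text{kcal}
\]
```

Collecting everything into a single coefficient of $x$ is exactly the simplification step the question asks to motivate.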
Winsock Problem

Hello, On an environment of Windows XP and 2008 Server over an MPLS network, since last week we have been getting Winsock errors at all of our stores. The error says that a license is required to use Winsock. How do we fix this? Because of this issue, we cannot transfer the data from the HQ to the stores. Where can we upgrade the license, if that is the case, and how would we do this? Thank you, KVIS

Michael-Best Commented: When a WinSock function fails, the resulting error value is the key to why it failed. And knowing why it failed is the key to finding a remedy or work-around. http://www.sockets.com/a_c.htm Repair/Reset Winsock settings http://windowsxp.mvps.org/winsock.htm

LeeTutor (retired) Commented: I've requested that this question be deleted for the following reason: Not enough information to confirm an answer.

Michael-Best Commented: My posting (ID: 38400776) may be the only posting to this question, but that does not mean it should be regarded as "Not enough information to confirm an answer". If the question asker refuses my answer then so be it, but I believe that I correctly researched the problem and answered it; for reference to others, the question should not be deleted.

KVIS (Author) Commented: In Access DB, there was a new field which needed to be filled. It was a data error, but it displayed as a Winsock error. Sorry for the delay; we finally got hold of the original programmer and he was going to fix it, but he left before it was fixed.

Michael-Best Commented: Glad to help.
Create Form Task - Updating Textfields with values based on Dropdown Selection Hello, I am trying to create a form where the values of the dropdown list will cause text fields to be pre-populated. For example, the dropdown list contains value1, value2, and value3. Each value has a corresponding dictionary that holds information, such as <name, Bob>. Depending on the selection made, the text fields will be updated with the necessary information. Using the key-value pair stated earlier, a text field called “name” will pre-populate with “Bob”. How can I achieve this? I am unfamiliar with how to create Advanced Logic within the form components, so any guidance would be appreciated! Hi @jchieng, Please refer to the attached sample workflow for Action Center forms. It contains dynamic-dropdown functionality with a form task. You can use this to customize for your requirement. Test Action Center.zip (4.2 KB) Hope this helps. Regards Sonali I appreciate the suggestion. I have come across this, and it seems to only work for the case of conditional dropdowns, where a parent dropdown affects the child dropdown. How can I get a dropdown to affect text fields instead? From my understanding, the suffix “_parent” will not work on text fields. @jchieng You could try writing JavaScript in the logic of your text fields to auto-populate them when values in the dropdowns change. See samples of JavaScript usage in actions
[gdiplus_winetest] [reactos.git] / rostests / winetests / gdiplus / graphicspath.c 1 /* 2 * Unit test suite for paths 3 * 4 * Copyright (C) 2007 Google (Evan Stade) 5 * 6 * This library is free software; you can redistribute it and/or 7 * modify it under the terms of the GNU Lesser General Public 8 * License as published by the Free Software Foundation; either 9 * version 2.1 of the License, or (at your option) any later version. 10 * 11 * This library is distributed in the hope that it will be useful, 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 14 * Lesser General Public License for more details. 15 * 16 * You should have received a copy of the GNU Lesser General Public 17 * License along with this library; if not, write to the Free Software 18 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA 19 */ 20 21 #include "windows.h" 22 #include "gdiplus.h" 23 #include "wine/test.h" 24 #include <math.h> 25 26 #define expect(expected, got) ok(got == expected, "Expected %.8x, got %.8x\n", expected, got) 27 #define expectf(expected, got) ok(fabs(expected - got) < 2.0, "Expected %.2f, got %.2f\n", expected, got) 28 #define POINT_TYPE_MAX_LEN (75) 29 30 static void stringify_point_type(PathPointType type, char * name) 31 { 32 *name = '\0'; 33 34 switch(type & PathPointTypePathTypeMask){ 35 case PathPointTypeStart: 36 strcat(name, "PathPointTypeStart"); 37 break; 38 case PathPointTypeLine: 39 strcat(name, "PathPointTypeLine"); 40 break; 41 case PathPointTypeBezier: 42 strcat(name, "PathPointTypeBezier"); 43 break; 44 default: 45 strcat(name, "Unknown type"); 46 return; 47 } 48 49 type &= ~PathPointTypePathTypeMask; 50 if(type & ~((PathPointTypePathMarker | PathPointTypeCloseSubpath))){ 51 *name = '\0'; 52 strcat(name, "Unknown type"); 53 return; 54 } 55 56 if(type & PathPointTypePathMarker) 57 strcat(name, " | PathPointTypePathMarker"); 58 if(type & 
PathPointTypeCloseSubpath) 59 strcat(name, " | PathPointTypeCloseSubpath"); 60 } 61 62 /* this helper structure and function modeled after gdi path.c test */ 63 typedef struct 64 { 65 REAL X, Y; 66 BYTE type; 67 68 /* How many extra entries before this one only on wine 69 * but not on native? */ 70 int wine_only_entries_preceding; 71 72 /* 0 - This entry matches on wine. 73 * 1 - This entry corresponds to a single entry on wine that does not match the native entry. 74 * 2 - This entry is currently skipped on wine but present on native. */ 75 int todo; 76 } path_test_t; 77 78 static void ok_path(GpPath* path, const path_test_t *expected, INT expected_size, BOOL todo_size) 79 { 80 BYTE * types; 81 INT size, idx = 0, eidx = 0, numskip; 82 GpPointF * points; 83 char ename[POINT_TYPE_MAX_LEN], name[POINT_TYPE_MAX_LEN]; 84 85 if(GdipGetPointCount(path, &size) != Ok){ 86 skip("Cannot perform path comparisons due to failure to retrieve path.\n"); 87 return; 88 } 89 90 if(todo_size) todo_wine 91 ok(size == expected_size, "Path size %d does not match expected size %d\n", 92 size, expected_size); 93 else 94 ok(size == expected_size, "Path size %d does not match expected size %d\n", 95 size, expected_size); 96 97 points = HeapAlloc(GetProcessHeap(), 0, size * sizeof(GpPointF)); 98 types = HeapAlloc(GetProcessHeap(), 0, size); 99 100 if(GdipGetPathPoints(path, points, size) != Ok || GdipGetPathTypes(path, types, size) != Ok){ 101 skip("Cannot perform path comparisons due to failure to retrieve path.\n"); 102 goto end; 103 } 104 105 numskip = expected_size ? 
expected[eidx].wine_only_entries_preceding : 0; 106 while (idx < size && eidx < expected_size){ 107 /* We allow a few pixels fudge in matching X and Y coordinates to account for imprecision in 108 * floating point to integer conversion */ 109 BOOL match = (types[idx] == expected[eidx].type) && 110 fabs(points[idx].X - expected[eidx].X) <= 2.0 && 111 fabs(points[idx].Y - expected[eidx].Y) <= 2.0; 112 113 stringify_point_type(expected[eidx].type, ename); 114 stringify_point_type(types[idx], name); 115 116 if (expected[eidx].todo || numskip) todo_wine 117 ok(match, "Expected #%d: %s (%.1f,%.1f) but got %s (%.1f,%.1f)\n", eidx, 118 ename, expected[eidx].X, expected[eidx].Y, 119 name, points[idx].X, points[idx].Y); 120 else 121 ok(match, "Expected #%d: %s (%.1f,%.1f) but got %s (%.1f,%.1f)\n", eidx, 122 ename, expected[eidx].X, expected[eidx].Y, 123 name, points[idx].X, points[idx].Y); 124 125 if (match || expected[eidx].todo != 2) 126 idx++; 127 if (match || !numskip--) 128 numskip = expected[++eidx].wine_only_entries_preceding; 129 } 130 131 end: 132 HeapFree(GetProcessHeap(), 0, types); 133 HeapFree(GetProcessHeap(), 0, points); 134 } 135 136 static void test_constructor_destructor(void) 137 { 138 GpStatus status; 139 GpPath* path = NULL; 140 141 status = GdipCreatePath(FillModeAlternate, &path); 142 expect(Ok, status); 143 ok(path != NULL, "Expected path to be initialized\n"); 144 145 status = GdipDeletePath(NULL); 146 expect(InvalidParameter, status); 147 148 status = GdipDeletePath(path); 149 expect(Ok, status); 150 } 151 152 static void test_getpathdata(void) 153 { 154 GpPath *path; 155 GpPathData data; 156 GpStatus status; 157 INT count; 158 159 GdipCreatePath(FillModeAlternate, &path); 160 status = GdipAddPathLine(path, 5.0, 5.0, 100.0, 50.0); 161 expect(Ok, status); 162 163 /* Prepare storage. Made by wrapper class. 
*/ 164 status = GdipGetPointCount(path, &count); 165 expect(Ok, status); 166 167 data.Count = 2; 168 data.Types = GdipAlloc(sizeof(BYTE) * count); 169 data.Points = GdipAlloc(sizeof(PointF) * count); 170 171 status = GdipGetPathData(path, &data); 172 expect(Ok, status); 173 expect((data.Points[0].X == 5.0) && (data.Points[0].Y == 5.0) && 174 (data.Points[1].X == 100.0) && (data.Points[1].Y == 50.0), TRUE); 175 expect((data.Types[0] == PathPointTypeStart) && (data.Types[1] == PathPointTypeLine), TRUE); 176 177 GdipFree(data.Points); 178 GdipFree(data.Types); 179 GdipDeletePath(path); 180 } 181 182 static path_test_t line2_path[] = { 183 {0.0, 50.0, PathPointTypeStart, 0, 0}, /*0*/ 184 {5.0, 45.0, PathPointTypeLine, 0, 0}, /*1*/ 185 {0.0, 40.0, PathPointTypeLine, 0, 0}, /*2*/ 186 {15.0, 35.0, PathPointTypeLine, 0, 0}, /*3*/ 187 {0.0, 30.0, PathPointTypeLine, 0, 0}, /*4*/ 188 {25.0, 25.0, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0}, /*5*/ 189 {0.0, 20.0, PathPointTypeStart, 0, 0}, /*6*/ 190 {35.0, 15.0, PathPointTypeLine, 0, 0}, /*7*/ 191 {0.0, 10.0, PathPointTypeLine, 0, 0} /*8*/ 192 }; 193 194 static void test_line2(void) 195 { 196 GpStatus status; 197 GpPath* path; 198 int i; 199 GpPointF line2_points[9]; 200 201 for(i = 0; i < 9; i ++){ 202 line2_points[i].X = i * 5.0 * (REAL)(i % 2); 203 line2_points[i].Y = 50.0 - i * 5.0; 204 } 205 206 GdipCreatePath(FillModeAlternate, &path); 207 status = GdipAddPathLine2(path, line2_points, 3); 208 expect(Ok, status); 209 status = GdipAddPathLine2(path, &(line2_points[3]), 3); 210 expect(Ok, status); 211 status = GdipClosePathFigure(path); 212 expect(Ok, status); 213 status = GdipAddPathLine2(path, &(line2_points[6]), 3); 214 expect(Ok, status); 215 216 ok_path(path, line2_path, sizeof(line2_path)/sizeof(path_test_t), FALSE); 217 218 GdipDeletePath(path); 219 } 220 221 static path_test_t arc_path[] = { 222 {600.0, 450.0, PathPointTypeStart, 0, 0}, /*0*/ 223 {600.0, 643.3, PathPointTypeBezier, 0, 0}, /*1*/ 224 {488.1, 
800.0, PathPointTypeBezier, 0, 0}, /*2*/ 225 {350.0, 800.0, PathPointTypeBezier, 0, 0}, /*3*/ 226 {600.0, 450.0, PathPointTypeLine, 0, 0}, /*4*/ 227 {600.0, 643.3, PathPointTypeBezier, 0, 0}, /*5*/ 228 {488.1, 800.0, PathPointTypeBezier, 0, 0}, /*6*/ 229 {350.0, 800.0, PathPointTypeBezier, 0, 0}, /*7*/ 230 {329.8, 800.0, PathPointTypeBezier, 0, 0}, /*8*/ 231 {309.7, 796.6, PathPointTypeBezier, 0, 0}, /*9*/ 232 {290.1, 789.8, PathPointTypeBezier, 0, 0}, /*10*/ 233 {409.9, 110.2, PathPointTypeLine, 0, 0}, /*11*/ 234 {544.0, 156.5, PathPointTypeBezier, 0, 0}, /*12*/ 235 {625.8, 346.2, PathPointTypeBezier, 0, 0}, /*13*/ 236 {592.7, 533.9, PathPointTypeBezier, 0, 0}, /*14*/ 237 {592.5, 535.3, PathPointTypeBezier, 0, 0}, /*15*/ 238 {592.2, 536.7, PathPointTypeBezier, 0, 0}, /*16*/ 239 {592.0, 538.1, PathPointTypeBezier, 0, 0}, /*17*/ 240 {409.9, 789.8, PathPointTypeLine, 0, 0}, /*18*/ 241 {544.0, 743.5, PathPointTypeBezier, 0, 0}, /*19*/ 242 {625.8, 553.8, PathPointTypeBezier, 0, 0}, /*20*/ 243 {592.7, 366.1, PathPointTypeBezier, 0, 0}, /*21*/ 244 {592.5, 364.7, PathPointTypeBezier, 0, 0}, /*22*/ 245 {592.2, 363.3, PathPointTypeBezier, 0, 0}, /*23*/ 246 {592.0, 361.9, PathPointTypeBezier, 0, 0}, /*24*/ 247 {540.4, 676.9, PathPointTypeLine, 0, 0}, /*25*/ 248 {629.9, 529.7, PathPointTypeBezier, 0, 0}, /*26*/ 249 {617.2, 308.8, PathPointTypeBezier, 0, 0}, /*27*/ 250 {512.1, 183.5, PathPointTypeBezier, 0, 0}, /*28*/ 251 {406.9, 58.2, PathPointTypeBezier, 0, 0}, /*29*/ 252 {249.1, 75.9, PathPointTypeBezier, 0, 0}, /*30*/ 253 {159.6, 223.1, PathPointTypeBezier, 0, 0}, /*31*/ 254 {70.1, 370.3, PathPointTypeBezier, 0, 0}, /*32*/ 255 {82.8, 591.2, PathPointTypeBezier, 0, 0}, /*33*/ 256 {187.9, 716.5, PathPointTypeBezier, 0, 0}, /*34*/ 257 {293.1, 841.8, PathPointTypeBezier, 0, 0}, /*35*/ 258 {450.9, 824.1, PathPointTypeBezier, 0, 0}, /*36*/ 259 {540.4, 676.9, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 1} /*37*/ 260 }; 261 262 static void test_arc(void) 263 { 264 GpStatus 
status; 265 GpPath* path; 266 267 GdipCreatePath(FillModeAlternate, &path); 268 /* Exactly 90 degrees */ 269 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 90.0); 270 expect(Ok, status); 271 /* Over 90 degrees */ 272 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 100.0); 273 expect(Ok, status); 274 /* Negative start angle */ 275 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, -80.0, 100.0); 276 expect(Ok, status); 277 /* Negative sweep angle */ 278 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 80.0, -100.0); 279 expect(Ok, status); 280 /* More than a full revolution */ 281 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 50.0, -400.0); 282 expect(Ok, status); 283 /* 0 sweep angle */ 284 status = GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 50.0, 0.0); 285 expect(Ok, status); 286 287 ok_path(path, arc_path, sizeof(arc_path)/sizeof(path_test_t), FALSE); 288 289 GdipDeletePath(path); 290 } 291 292 static void test_worldbounds(void) 293 { 294 GpStatus status; 295 GpPath *path; 296 GpPen *pen; 297 GpMatrix *matrix; 298 GpRectF bounds; 299 GpPointF line2_points[10]; 300 int i; 301 302 for(i = 0; i < 10; i ++){ 303 line2_points[i].X = 200.0 + i * 50.0 * (i % 2); 304 line2_points[i].Y = 200.0 + i * 50.0 * !(i % 2); 305 } 306 GdipCreatePen1((ARGB)0xdeadbeef, 20.0, UnitWorld, &pen); 307 GdipSetPenEndCap(pen, LineCapSquareAnchor); 308 GdipCreateMatrix2(1.5, 0.0, 1.0, 1.2, 10.4, 10.2, &matrix); 309 310 GdipCreatePath(FillModeAlternate, &path); 311 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 100.0); 312 GdipAddPathLine2(path, &(line2_points[0]), 10); 313 status = GdipGetPathWorldBounds(path, &bounds, NULL, NULL); 314 expect(Ok, status); 315 GdipDeletePath(path); 316 317 expectf(200.0, bounds.X); 318 expectf(200.0, bounds.Y); 319 expectf(450.0, bounds.Width); 320 expectf(600.0, bounds.Height); 321 322 GdipCreatePath(FillModeAlternate, &path); 323 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 
100.0); 324 GdipAddPathLine2(path, &(line2_points[0]), 10); 325 status = GdipGetPathWorldBounds(path, &bounds, matrix, NULL); 326 expect(Ok, status); 327 GdipDeletePath(path); 328 329 expectf(510.4, bounds.X); 330 expectf(250.2, bounds.Y); 331 expectf(1275.0, bounds.Width); 332 expectf(720.0, bounds.Height); 333 334 GdipCreatePath(FillModeAlternate, &path); 335 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 100.0); 336 GdipAddPathLine2(path, &(line2_points[0]), 10); 337 status = GdipGetPathWorldBounds(path, &bounds, NULL, pen); 338 expect(Ok, status); 339 GdipDeletePath(path); 340 341 expectf(100.0, bounds.X); 342 expectf(100.0, bounds.Y); 343 expectf(650.0, bounds.Width); 344 expectf(800.0, bounds.Height); 345 346 GdipCreatePath(FillModeAlternate, &path); 347 GdipAddPathLine2(path, &(line2_points[0]), 2); 348 status = GdipGetPathWorldBounds(path, &bounds, NULL, pen); 349 expect(Ok, status); 350 GdipDeletePath(path); 351 352 expectf(156.0, bounds.X); 353 expectf(156.0, bounds.Y); 354 expectf(138.0, bounds.Width); 355 expectf(88.0, bounds.Height); 356 357 line2_points[2].X = 2 * line2_points[1].X - line2_points[0].X; 358 line2_points[2].Y = 2 * line2_points[1].Y - line2_points[0].Y; 359 360 GdipCreatePath(FillModeAlternate, &path); 361 GdipAddPathLine2(path, &(line2_points[0]), 3); 362 status = GdipGetPathWorldBounds(path, &bounds, NULL, pen); 363 expect(Ok, status); 364 GdipDeletePath(path); 365 366 expectf(100.0, bounds.X); 367 expectf(100.0, bounds.Y); 368 expectf(300.0, bounds.Width); 369 expectf(200.0, bounds.Height); 370 371 GdipCreatePath(FillModeAlternate, &path); 372 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 45.0, 20.0); 373 status = GdipGetPathWorldBounds(path, &bounds, NULL, pen); 374 expect(Ok, status); 375 GdipDeletePath(path); 376 377 expectf(386.7, bounds.X); 378 expectf(553.4, bounds.Y); 379 expectf(266.8, bounds.Width); 380 expectf(289.6, bounds.Height); 381 382 GdipCreatePath(FillModeAlternate, &path); 383 status = 
GdipGetPathWorldBounds(path, &bounds, matrix, pen); 384 expect(Ok, status); 385 GdipDeletePath(path); 386 387 expectf(0.0, bounds.X); 388 expectf(0.0, bounds.Y); 389 expectf(0.0, bounds.Width); 390 expectf(0.0, bounds.Height); 391 392 GdipCreatePath(FillModeAlternate, &path); 393 GdipAddPathLine2(path, &(line2_points[0]), 2); 394 status = GdipGetPathWorldBounds(path, &bounds, matrix, pen); 395 expect(Ok, status); 396 GdipDeletePath(path); 397 398 todo_wine{ 399 expectf(427.9, bounds.X); 400 expectf(167.7, bounds.Y); 401 expectf(239.9, bounds.Width); 402 expectf(164.9, bounds.Height); 403 } 404 405 GdipDeleteMatrix(matrix); 406 GdipCreateMatrix2(0.9, -0.5, -0.5, -1.2, 10.4, 10.2, &matrix); 407 GdipCreatePath(FillModeAlternate, &path); 408 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, 0.0, 100.0); 409 GdipAddPathLine2(path, &(line2_points[0]), 10); 410 status = GdipGetPathWorldBounds(path, &bounds, matrix, NULL); 411 expect(Ok, status); 412 GdipDeletePath(path); 413 GdipDeleteMatrix(matrix); 414 415 expectf(-209.6, bounds.X); 416 expectf(-1274.8, bounds.Y); 417 expectf(705.0, bounds.Width); 418 expectf(945.0, bounds.Height); 419 420 GdipDeletePen(pen); 421 } 422 423 static path_test_t pathpath_path[] = { 424 {600.00, 450.00, PathPointTypeStart, 0, 0}, /*0*/ 425 {600.00, 643.30, PathPointTypeBezier, 0, 0}, /*1*/ 426 {488.07, 800.00, PathPointTypeBezier, 0, 0}, /*2*/ 427 {350.00, 800.00, PathPointTypeBezier, 0, 0}, /*3*/ 428 {319.61, 797.40, PathPointTypeStart, 0, 0}, /*4*/ 429 {182.56, 773.90, PathPointTypeBezier, 0, 0}, /*5*/ 430 {85.07, 599.31, PathPointTypeBezier, 0, 0}, /*6*/ 431 {101.85, 407.45, PathPointTypeBezier, 0, 0}, /*7*/ 432 {102.54, 399.66, PathPointTypeBezier, 0, 0}, /*8*/ 433 {103.40, 391.91, PathPointTypeBezier, 0, 0}, /*9*/ 434 {104.46, 384.21, PathPointTypeBezier, 0, 0}, /*10*/ 435 {409.92, 110.20, PathPointTypeLine, 0, 0}, /*11*/ 436 {543.96, 156.53, PathPointTypeBezier, 0, 0}, /*12*/ 437 {625.80, 346.22, PathPointTypeBezier, 0, 0}, /*13*/ 438 
{592.71, 533.88, PathPointTypeBezier, 0, 0}, /*14*/ 439 {592.47, 535.28, PathPointTypeBezier, 0, 0}, /*15*/ 440 {592.22, 536.67, PathPointTypeBezier, 0, 0}, /*16*/ 441 {591.96, 538.06, PathPointTypeBezier, 0, 0}, /*17*/ 442 {319.61, 797.40, PathPointTypeLine, 0, 0}, /*18*/ 443 {182.56, 773.90, PathPointTypeBezier, 0, 0}, /*19*/ 444 {85.07, 599.31, PathPointTypeBezier, 0, 0}, /*20*/ 445 {101.85, 407.45, PathPointTypeBezier, 0, 0}, /*21*/ 446 {102.54, 399.66, PathPointTypeBezier, 0, 0}, /*22*/ 447 {103.40, 391.91, PathPointTypeBezier, 0, 0}, /*23*/ 448 {104.46, 384.21, PathPointTypeBezier, 0, 0} /*24*/ 449 }; 450 451 static void test_pathpath(void) 452 { 453 GpStatus status; 454 GpPath* path1, *path2; 455 456 GdipCreatePath(FillModeAlternate, &path2); 457 GdipAddPathArc(path2, 100.0, 100.0, 500.0, 700.0, 95.0, 100.0); 458 459 GdipCreatePath(FillModeAlternate, &path1); 460 GdipAddPathArc(path1, 100.0, 100.0, 500.0, 700.0, 0.0, 90.0); 461 status = GdipAddPathPath(path1, path2, FALSE); 462 expect(Ok, status); 463 GdipAddPathArc(path1, 100.0, 100.0, 500.0, 700.0, -80.0, 100.0); 464 status = GdipAddPathPath(path1, path2, TRUE); 465 expect(Ok, status); 466 467 ok_path(path1, pathpath_path, sizeof(pathpath_path)/sizeof(path_test_t), FALSE); 468 469 GdipDeletePath(path1); 470 GdipDeletePath(path2); 471 } 472 473 static path_test_t ellipse_path[] = { 474 {30.00, 125.25, PathPointTypeStart, 0, 0}, /*0*/ 475 {30.00, 139.20, PathPointTypeBezier, 0, 0}, /*1*/ 476 {25.52, 150.50, PathPointTypeBezier, 0, 0}, /*2*/ 477 {20.00, 150.50, PathPointTypeBezier, 0, 0}, /*3*/ 478 {14.48, 150.50, PathPointTypeBezier, 0, 0}, /*4*/ 479 {10.00, 139.20, PathPointTypeBezier, 0, 0}, /*5*/ 480 {10.00, 125.25, PathPointTypeBezier, 0, 0}, /*6*/ 481 {10.00, 111.30, PathPointTypeBezier, 0, 0}, /*7*/ 482 {14.48, 100.00, PathPointTypeBezier, 0, 0}, /*8*/ 483 {20.00, 100.00, PathPointTypeBezier, 0, 0}, /*9*/ 484 {25.52, 100.00, PathPointTypeBezier, 0, 0}, /*10*/ 485 {30.00, 111.30, PathPointTypeBezier, 0, 
0}, /*11*/ 486 {30.00, 125.25, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 0}, /*12*/ 487 {7.00, 11.00, PathPointTypeStart, 0, 0}, /*13*/ 488 {13.00, 17.00, PathPointTypeLine, 0, 0}, /*14*/ 489 {5.00, 195.00, PathPointTypeStart, 0, 0}, /*15*/ 490 {5.00, 192.24, PathPointTypeBezier, 0, 0}, /*16*/ 491 {6.12, 190.00, PathPointTypeBezier, 0, 0}, /*17*/ 492 {7.50, 190.00, PathPointTypeBezier, 0, 0}, /*18*/ 493 {8.88, 190.00, PathPointTypeBezier, 0, 0}, /*19*/ 494 {10.00, 192.24, PathPointTypeBezier, 0, 0}, /*20*/ 495 {10.00, 195.00, PathPointTypeBezier, 0, 0}, /*21*/ 496 {10.00, 197.76, PathPointTypeBezier, 0, 0}, /*22*/ 497 {8.88, 200.00, PathPointTypeBezier, 0, 0}, /*23*/ 498 {7.50, 200.00, PathPointTypeBezier, 0, 0}, /*24*/ 499 {6.12, 200.00, PathPointTypeBezier, 0, 0}, /*25*/ 500 {5.00, 197.76, PathPointTypeBezier, 0, 0}, /*26*/ 501 {5.00, 195.00, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 0}, /*27*/ 502 {10.00, 300.50, PathPointTypeStart, 0, 0}, /*28*/ 503 {10.00, 300.78, PathPointTypeBezier, 0, 0}, /*29*/ 504 {10.00, 301.00, PathPointTypeBezier, 0, 0}, /*30*/ 505 {10.00, 301.00, PathPointTypeBezier, 0, 0}, /*31*/ 506 {10.00, 301.00, PathPointTypeBezier, 0, 0}, /*32*/ 507 {10.00, 300.78, PathPointTypeBezier, 0, 0}, /*33*/ 508 {10.00, 300.50, PathPointTypeBezier, 0, 0}, /*34*/ 509 {10.00, 300.22, PathPointTypeBezier, 0, 0}, /*35*/ 510 {10.00, 300.00, PathPointTypeBezier, 0, 0}, /*36*/ 511 {10.00, 300.00, PathPointTypeBezier, 0, 0}, /*37*/ 512 {10.00, 300.00, PathPointTypeBezier, 0, 0}, /*38*/ 513 {10.00, 300.22, PathPointTypeBezier, 0, 0}, /*39*/ 514 {10.00, 300.50, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 0} /*40*/ 515 }; 516 517 static void test_ellipse(void) 518 { 519 GpStatus status; 520 GpPath *path; 521 GpPointF points[2]; 522 523 points[0].X = 7.0; 524 points[0].Y = 11.0; 525 points[1].X = 13.0; 526 points[1].Y = 17.0; 527 528 GdipCreatePath(FillModeAlternate, &path); 529 status = GdipAddPathEllipse(path, 10.0, 100.0, 20.0, 50.5); 
530 expect(Ok, status); 531 GdipAddPathLine2(path, points, 2); 532 status = GdipAddPathEllipse(path, 10.0, 200.0, -5.0, -10.0); 533 expect(Ok, status); 534 GdipClosePathFigure(path); 535 status = GdipAddPathEllipse(path, 10.0, 300.0, 0.0, 1.0); 536 expect(Ok, status); 537 538 ok_path(path, ellipse_path, sizeof(ellipse_path)/sizeof(path_test_t), FALSE); 539 540 GdipDeletePath(path); 541 } 542 543 static path_test_t linei_path[] = { 544 {5.00, 5.00, PathPointTypeStart, 0, 0}, /*0*/ 545 {6.00, 8.00, PathPointTypeLine, 0, 0}, /*1*/ 546 {409.92, 110.20, PathPointTypeLine, 0, 0}, /*2*/ 547 {543.96, 156.53, PathPointTypeBezier, 0, 0}, /*3*/ 548 {625.80, 346.22, PathPointTypeBezier, 0, 0}, /*4*/ 549 {592.71, 533.88, PathPointTypeBezier, 0, 0}, /*5*/ 550 {592.47, 535.28, PathPointTypeBezier, 0, 0}, /*6*/ 551 {592.22, 536.67, PathPointTypeBezier, 0, 0}, /*7*/ 552 {591.96, 538.06, PathPointTypeBezier, 0, 0}, /*8*/ 553 {15.00, 15.00, PathPointTypeLine, 0, 0}, /*9*/ 554 {26.00, 28.00, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0}, /*10*/ 555 {35.00, 35.00, PathPointTypeStart, 0, 0}, /*11*/ 556 {36.00, 38.00, PathPointTypeLine, 0, 0} /*12*/ 557 }; 558 559 static void test_linei(void) 560 { 561 GpStatus status; 562 GpPath *path; 563 GpPointF points[2]; 564 565 points[0].X = 7.0; 566 points[0].Y = 11.0; 567 points[1].X = 13.0; 568 points[1].Y = 17.0; 569 570 GdipCreatePath(FillModeAlternate, &path); 571 status = GdipAddPathLineI(path, 5.0, 5.0, 6.0, 8.0); 572 expect(Ok, status); 573 GdipAddPathArc(path, 100.0, 100.0, 500.0, 700.0, -80.0, 100.0); 574 status = GdipAddPathLineI(path, 15.0, 15.0, 26.0, 28.0); 575 expect(Ok, status); 576 GdipClosePathFigure(path); 577 status = GdipAddPathLineI(path, 35.0, 35.0, 36.0, 38.0); 578 expect(Ok, status); 579 580 ok_path(path, linei_path, sizeof(linei_path)/sizeof(path_test_t), FALSE); 581 582 GdipDeletePath(path); 583 } 584 585 static path_test_t poly_path[] = { 586 {5.00, 5.00, PathPointTypeStart, 0, 0}, /*1*/ 587 {6.00, 8.00, 
PathPointTypeLine, 0, 0}, /*2*/ 588 {0.00, 0.00, PathPointTypeStart, 0, 0}, /*3*/ 589 {10.00, 10.00, PathPointTypeLine, 0, 0}, /*4*/ 590 {10.00, 20.00, PathPointTypeLine, 0, 0}, /*5*/ 591 {30.00, 10.00, PathPointTypeLine, 0, 0}, /*6*/ 592 {20.00, 0.00, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0}, /*7*/ 593 }; 594 595 static void test_polygon(void) 596 { 597 GpStatus status; 598 GpPath *path; 599 GpPointF points[5]; 600 601 points[0].X = 0.0; 602 points[0].Y = 0.0; 603 points[1].X = 10.0; 604 points[1].Y = 10.0; 605 points[2].X = 10.0; 606 points[2].Y = 20.0; 607 points[3].X = 30.0; 608 points[3].Y = 10.0; 609 points[4].X = 20.0; 610 points[4].Y = 0.0; 611 612 GdipCreatePath(FillModeAlternate, &path); 613 614 /* NULL args */ 615 status = GdipAddPathPolygon(NULL, points, 5); 616 expect(InvalidParameter, status); 617 status = GdipAddPathPolygon(path, NULL, 5); 618 expect(InvalidParameter, status); 619 /* Polygon should have 3 points at least */ 620 status = GdipAddPathPolygon(path, points, 2); 621 expect(InvalidParameter, status); 622 623 /* to test how it prolongs not empty path */ 624 status = GdipAddPathLine(path, 5.0, 5.0, 6.0, 8.0); 625 expect(Ok, status); 626 status = GdipAddPathPolygon(path, points, 5); 627 expect(Ok, status); 628 /* check resulting path */ 629 ok_path(path, poly_path, sizeof(poly_path)/sizeof(path_test_t), FALSE); 630 631 GdipDeletePath(path); 632 } 633 634 static path_test_t rect_path[] = { 635 {5.0, 5.0, PathPointTypeStart, 0, 0}, /*0*/ 636 {105.0, 5.0, PathPointTypeLine, 0, 0}, /*1*/ 637 {105.0, 55.0, PathPointTypeLine, 0, 0}, /*2*/ 638 {5.0, 55.0, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0}, /*3*/ 639 640 {100.0, 50.0, PathPointTypeStart, 0, 0}, /*4*/ 641 {220.0, 50.0, PathPointTypeLine, 0, 0}, /*5*/ 642 {220.0, 80.0, PathPointTypeLine, 0, 0}, /*6*/ 643 {100.0, 80.0, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0} /*7*/ 644 }; 645 646 static void test_rect(void) 647 { 648 GpStatus status; 649 GpPath *path; 650 
GpRectF rects[2]; 651 652 GdipCreatePath(FillModeAlternate, &path); 653 status = GdipAddPathRectangle(path, 5.0, 5.0, 100.0, 50.0); 654 expect(Ok, status); 655 status = GdipAddPathRectangle(path, 100.0, 50.0, 120.0, 30.0); 656 expect(Ok, status); 657 658 ok_path(path, rect_path, sizeof(rect_path)/sizeof(path_test_t), FALSE); 659 660 GdipDeletePath(path); 661 662 GdipCreatePath(FillModeAlternate, &path); 663 664 rects[0].X = 5.0; 665 rects[0].Y = 5.0; 666 rects[0].Width = 100.0; 667 rects[0].Height = 50.0; 668 rects[1].X = 100.0; 669 rects[1].Y = 50.0; 670 rects[1].Width = 120.0; 671 rects[1].Height = 30.0; 672 673 status = GdipAddPathRectangles(path, (GDIPCONST GpRectF*)&rects, 2); 674 expect(Ok, status); 675 676 ok_path(path, rect_path, sizeof(rect_path)/sizeof(path_test_t), FALSE); 677 678 GdipDeletePath(path); 679 } 680 681 static void test_lastpoint(void) 682 { 683 GpStatus status; 684 GpPath *path; 685 GpPointF ptf; 686 687 GdipCreatePath(FillModeAlternate, &path); 688 status = GdipAddPathRectangle(path, 5.0, 5.0, 100.0, 50.0); 689 expect(Ok, status); 690 691 /* invalid args */ 692 status = GdipGetPathLastPoint(NULL, &ptf); 693 expect(InvalidParameter, status); 694 status = GdipGetPathLastPoint(path, NULL); 695 expect(InvalidParameter, status); 696 status = GdipGetPathLastPoint(NULL, NULL); 697 expect(InvalidParameter, status); 698 699 status = GdipGetPathLastPoint(path, &ptf); 700 expect(Ok, status); 701 expect(TRUE, (ptf.X == 5.0) && (ptf.Y == 55.0)); 702 703 GdipDeletePath(path); 704 } 705 706 static path_test_t addcurve_path[] = { 707 {0.0, 0.0, PathPointTypeStart, 0, 0}, /*0*/ 708 {3.3, 3.3, PathPointTypeBezier, 0, 0}, /*1*/ 709 {6.7, 3.3, PathPointTypeBezier, 0, 0}, /*2*/ 710 {10.0, 10.0, PathPointTypeBezier, 0, 0}, /*3*/ 711 {13.3, 16.7, PathPointTypeBezier, 0, 0}, /*4*/ 712 {3.3, 20.0, PathPointTypeBezier, 0, 0}, /*5*/ 713 {10.0, 20.0, PathPointTypeBezier, 0, 0}, /*6*/ 714 {16.7, 20.0, PathPointTypeBezier, 0, 0}, /*7*/ 715 {23.3, 13.3, 
PathPointTypeBezier, 0, 0}, /*8*/ 716 {30.0, 10.0, PathPointTypeBezier, 0, 0} /*9*/ 717 }; 718 static path_test_t addcurve_path2[] = { 719 {100.0,120.0,PathPointTypeStart, 0, 0}, /*0*/ 720 {123.0,10.0, PathPointTypeLine, 0, 0}, /*1*/ 721 {0.0, 0.0, PathPointTypeLine, 0, 0}, /*2*/ 722 {3.3, 3.3, PathPointTypeBezier, 0, 0}, /*3*/ 723 {6.7, 3.3, PathPointTypeBezier, 0, 0}, /*4*/ 724 {10.0, 10.0, PathPointTypeBezier, 0, 0}, /*5*/ 725 {13.3, 16.7, PathPointTypeBezier, 0, 0}, /*6*/ 726 {3.3, 20.0, PathPointTypeBezier, 0, 0}, /*7*/ 727 {10.0, 20.0, PathPointTypeBezier, 0, 0}, /*8*/ 728 {16.7, 20.0, PathPointTypeBezier, 0, 0}, /*9*/ 729 {23.3, 13.3, PathPointTypeBezier, 0, 0}, /*10*/ 730 {30.0, 10.0, PathPointTypeBezier, 0, 0} /*11*/ 731 }; 732 static path_test_t addcurve_path3[] = { 733 {10.0, 10.0, PathPointTypeStart, 0, 0}, /*0*/ 734 {13.3, 16.7, PathPointTypeBezier, 0, 1}, /*1*/ 735 {3.3, 20.0, PathPointTypeBezier, 0, 0}, /*2*/ 736 {10.0, 20.0, PathPointTypeBezier, 0, 0}, /*3*/ 737 {16.7, 20.0, PathPointTypeBezier, 0, 0}, /*4*/ 738 {23.3, 13.3, PathPointTypeBezier, 0, 0}, /*5*/ 739 {30.0, 10.0, PathPointTypeBezier, 0, 0} /*6*/ 740 }; 741 static void test_addcurve(void) 742 { 743 GpStatus status; 744 GpPath *path; 745 GpPointF points[4]; 746 747 points[0].X = 0.0; 748 points[0].Y = 0.0; 749 points[1].X = 10.0; 750 points[1].Y = 10.0; 751 points[2].X = 10.0; 752 points[2].Y = 20.0; 753 points[3].X = 30.0; 754 points[3].Y = 10.0; 755 756 GdipCreatePath(FillModeAlternate, &path); 757 758 /* NULL args */ 759 status = GdipAddPathCurve2(NULL, NULL, 0, 0.0); 760 expect(InvalidParameter, status); 761 status = GdipAddPathCurve2(path, NULL, 0, 0.0); 762 expect(InvalidParameter, status); 763 status = GdipAddPathCurve2(path, points, -1, 0.0); 764 expect(InvalidParameter, status); 765 status = GdipAddPathCurve2(path, points, 1, 1.0); 766 expect(InvalidParameter, status); 767 768 /* add to empty path */ 769 status = GdipAddPathCurve2(path, points, 4, 1.0); 770 expect(Ok, status); 771 
ok_path(path, addcurve_path, sizeof(addcurve_path)/sizeof(path_test_t), FALSE); 772 GdipDeletePath(path); 773 774 /* add to notempty path and opened figure */ 775 GdipCreatePath(FillModeAlternate, &path); 776 GdipAddPathLine(path, 100.0, 120.0, 123.0, 10.0); 777 status = GdipAddPathCurve2(path, points, 4, 1.0); 778 expect(Ok, status); 779 ok_path(path, addcurve_path2, sizeof(addcurve_path2)/sizeof(path_test_t), FALSE); 780 781 /* NULL args */ 782 GdipResetPath(path); 783 status = GdipAddPathCurve3(NULL, NULL, 0, 0, 0, 0.0); 784 expect(InvalidParameter, status); 785 status = GdipAddPathCurve3(path, NULL, 0, 0, 0, 0.0); 786 expect(InvalidParameter, status); 787 /* wrong count, offset.. */ 788 status = GdipAddPathCurve3(path, points, 0, 0, 0, 0.0); 789 expect(InvalidParameter, status); 790 status = GdipAddPathCurve3(path, points, 4, 0, 0, 0.0); 791 expect(InvalidParameter, status); 792 status = GdipAddPathCurve3(path, points, 4, 0, 4, 0.0); 793 expect(InvalidParameter, status); 794 status = GdipAddPathCurve3(path, points, 4, 1, 3, 0.0); 795 expect(InvalidParameter, status); 796 status = GdipAddPathCurve3(path, points, 4, 1, 0, 0.0); 797 expect(InvalidParameter, status); 798 status = GdipAddPathCurve3(path, points, 4, 3, 1, 0.0); 799 expect(InvalidParameter, status); 800 801 /* use all points */ 802 status = GdipAddPathCurve3(path, points, 4, 0, 3, 1.0); 803 expect(Ok, status); 804 ok_path(path, addcurve_path, sizeof(addcurve_path)/sizeof(path_test_t), FALSE); 805 GdipResetPath(path); 806 807 status = GdipAddPathCurve3(path, points, 4, 1, 2, 1.0); 808 expect(Ok, status); 809 ok_path(path, addcurve_path3, sizeof(addcurve_path3)/sizeof(path_test_t), FALSE); 810 811 GdipDeletePath(path); 812 } 813 814 static path_test_t addclosedcurve_path[] = { 815 {0.0, 0.0, PathPointTypeStart, 0, 0}, /*0*/ 816 {-6.7, 0.0, PathPointTypeBezier, 0, 0}, /*1*/ 817 {6.7, 3.3, PathPointTypeBezier, 0, 0}, /*2*/ 818 {10.0, 10.0, PathPointTypeBezier, 0, 0}, /*3*/ 819 {13.3, 16.7, 
PathPointTypeBezier, 0, 0}, /*4*/ 820 {3.3, 20.0, PathPointTypeBezier, 0, 0}, /*5*/ 821 {10.0, 20.0, PathPointTypeBezier, 0, 0}, /*6*/ 822 {16.7, 20.0, PathPointTypeBezier, 0, 0}, /*7*/ 823 {33.3, 16.7, PathPointTypeBezier, 0, 0}, /*8*/ 824 {30.0, 10.0, PathPointTypeBezier, 0, 0}, /*9*/ 825 {26.7, 3.3, PathPointTypeBezier, 0, 0}, /*10*/ 826 {6.7, 0.0, PathPointTypeBezier, 0, 0}, /*11*/ 827 {0.0, 0.0, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 0} /*12*/ 828 }; 829 static void test_addclosedcurve(void) 830 { 831 GpStatus status; 832 GpPath *path; 833 GpPointF points[4]; 834 835 points[0].X = 0.0; 836 points[0].Y = 0.0; 837 points[1].X = 10.0; 838 points[1].Y = 10.0; 839 points[2].X = 10.0; 840 points[2].Y = 20.0; 841 points[3].X = 30.0; 842 points[3].Y = 10.0; 843 844 GdipCreatePath(FillModeAlternate, &path); 845 846 /* NULL args */ 847 status = GdipAddPathClosedCurve2(NULL, NULL, 0, 0.0); 848 expect(InvalidParameter, status); 849 status = GdipAddPathClosedCurve2(path, NULL, 0, 0.0); 850 expect(InvalidParameter, status); 851 status = GdipAddPathClosedCurve2(path, points, -1, 0.0); 852 expect(InvalidParameter, status); 853 status = GdipAddPathClosedCurve2(path, points, 1, 1.0); 854 expect(InvalidParameter, status); 855 856 /* add to empty path */ 857 status = GdipAddPathClosedCurve2(path, points, 4, 1.0); 858 expect(Ok, status); 859 ok_path(path, addclosedcurve_path, sizeof(addclosedcurve_path)/sizeof(path_test_t), FALSE); 860 GdipDeletePath(path); 861 } 862 863 static path_test_t reverse_path[] = { 864 {0.0, 20.0, PathPointTypeStart, 0, 0}, /*0*/ 865 {25.0, 25.0, PathPointTypeLine, 0, 0}, /*1*/ 866 {0.0, 30.0, PathPointTypeLine, 0, 0}, /*2*/ 867 {15.0, 35.0, PathPointTypeStart, 0, 0}, /*3*/ 868 {0.0, 40.0, PathPointTypeLine, 0, 0}, /*4*/ 869 {5.0, 45.0, PathPointTypeLine, 0, 0}, /*5*/ 870 {0.0, 50.0, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 0} /*6*/ 871 }; 872 873 static void test_reverse(void) 874 { 875 GpStatus status; 876 GpPath *path; 877 
GpPointF pts[7]; 878 INT i; 879 880 for(i = 0; i < 7; i++){ 881 pts[i].X = i * 5.0 * (REAL)(i % 2); 882 pts[i].Y = 50.0 - i * 5.0; 883 } 884 885 GdipCreatePath(FillModeAlternate, &path); 886 887 /* NULL argument */ 888 status = GdipReversePath(NULL); 889 expect(InvalidParameter, status); 890 891 /* empty path */ 892 status = GdipReversePath(path); 893 expect(Ok, status); 894 895 GdipAddPathLine2(path, pts, 4); 896 GdipClosePathFigure(path); 897 GdipAddPathLine2(path, &(pts[4]), 3); 898 899 status = GdipReversePath(path); 900 expect(Ok, status); 901 ok_path(path, reverse_path, sizeof(reverse_path)/sizeof(path_test_t), FALSE); 902 903 GdipDeletePath(path); 904 } 905 906 static path_test_t addpie_path[] = { 907 {50.0, 25.0, PathPointTypeStart, 0, 0}, /*0*/ 908 {97.2, 33.3, PathPointTypeLine, 0, 0}, /*1*/ 909 {91.8, 40.9, PathPointTypeBezier,0, 0}, /*2*/ 910 {79.4, 46.8, PathPointTypeBezier,0, 0}, /*3*/ 911 {63.9, 49.0, PathPointTypeBezier | PathPointTypeCloseSubpath, 0, 0} /*4*/ 912 }; 913 static path_test_t addpie_path2[] = { 914 {0.0, 30.0, PathPointTypeStart | PathPointTypeCloseSubpath, 0, 0} /*0*/ 915 }; 916 static path_test_t addpie_path3[] = { 917 {30.0, 0.0, PathPointTypeStart | PathPointTypeCloseSubpath, 0, 0} /*0*/ 918 }; 919 static void test_addpie(void) 920 { 921 GpStatus status; 922 GpPath *path; 923 924 GdipCreatePath(FillModeAlternate, &path); 925 926 /* NULL argument */ 927 status = GdipAddPathPie(NULL, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0); 928 expect(InvalidParameter, status); 929 930 status = GdipAddPathPie(path, 0.0, 0.0, 100.0, 50.0, 10.0, 50.0); 931 expect(Ok, status); 932 ok_path(path, addpie_path, sizeof(addpie_path)/sizeof(path_test_t), FALSE); 933 status = GdipResetPath(path); 934 expect(Ok, status); 935 936 /* zero width base ellipse */ 937 status = GdipAddPathPie(path, 0.0, 0.0, 0.0, 60.0, -90.0, 24.0); 938 expect(InvalidParameter, status); 939 ok_path(path, addpie_path2, sizeof(addpie_path2)/sizeof(path_test_t), FALSE); 940 status = 
GdipResetPath(path); 941 expect(Ok, status); 942 943 /* zero height base ellipse */ 944 status = GdipAddPathPie(path, 0.0, 0.0, 60.0, 0.0 , -90.0, 24.0); 945 expect(InvalidParameter, status); 946 ok_path(path, addpie_path3, sizeof(addpie_path3)/sizeof(path_test_t), FALSE); 947 948 GdipDeletePath(path); 949 } 950 951 static path_test_t flattenellipse_path[] = { 952 {100.0, 25.0,PathPointTypeStart, 0, 0}, /*0*/ 953 {99.0, 30.0, PathPointTypeLine, 0, 0}, /*1*/ 954 {96.0, 34.8, PathPointTypeLine, 0, 0}, /*2*/ 955 {91.5, 39.0, PathPointTypeLine, 0, 0}, /*3*/ 956 {85.5, 42.8, PathPointTypeLine, 0, 0}, /*4*/ 957 {69.5, 48.0, PathPointTypeLine, 0, 1}, /*5*/ 958 {50.0, 50.0, PathPointTypeLine, 0, 1}, /*6*/ 959 {30.5, 48.0, PathPointTypeLine, 0, 1}, /*7*/ 960 {14.8, 42.8, PathPointTypeLine, 0, 1}, /*8*/ 961 {8.5, 39.0, PathPointTypeLine, 0, 1}, /*9*/ 962 {4.0, 34.8, PathPointTypeLine, 0, 1}, /*10*/ 963 {1.0, 30.0, PathPointTypeLine, 0, 1}, /*11*/ 964 {0.0, 25.0, PathPointTypeLine, 0, 1}, /*12*/ 965 {1.0, 20.0, PathPointTypeLine, 0, 1}, /*13*/ 966 {4.0, 15.3, PathPointTypeLine, 0, 1}, /*14*/ 967 {8.5, 11.0, PathPointTypeLine, 0, 1}, /*15*/ 968 {14.8, 7.3, PathPointTypeLine, 0, 1}, /*16*/ 969 {30.5, 2.0, PathPointTypeLine, 0, 1}, /*17*/ 970 {50.0, 0.0, PathPointTypeLine, 0, 1}, /*18*/ 971 {69.5, 2.0, PathPointTypeLine, 0, 1}, /*19*/ 972 {85.5, 7.3, PathPointTypeLine, 0, 1}, /*20*/ 973 {91.5, 11.0, PathPointTypeLine, 0, 1}, /*21*/ 974 {96.0, 15.3, PathPointTypeLine, 0, 1}, /*22*/ 975 {99.0, 20.0, PathPointTypeLine, 0, 1}, /*23*/ 976 {100.0,25.0, PathPointTypeLine | PathPointTypeCloseSubpath, 0, 1} /*24*/ 977 }; 978 979 static path_test_t flattenline_path[] = { 980 {5.0, 10.0,PathPointTypeStart, 0, 0}, /*0*/ 981 {50.0, 100.0, PathPointTypeLine, 0, 0} /*1*/ 982 }; 983 984 static path_test_t flattenarc_path[] = { 985 {100.0, 25.0,PathPointTypeStart, 0, 0}, /*0*/ 986 {99.0, 30.0, PathPointTypeLine, 0, 0}, /*1*/ 987 {96.0, 34.8, PathPointTypeLine, 0, 0}, /*2*/ 988 {91.5, 39.0, 
PathPointTypeLine, 0, 0}, /*3*/ 989 {85.5, 42.8, PathPointTypeLine, 0, 0}, /*4*/ 990 {69.5, 48.0, PathPointTypeLine, 0, 1}, /*5*/ 991 {50.0, 50.0, PathPointTypeLine, 0, 1} /*6*/ 992 }; 993 994 static path_test_t flattenquater_path[] = { 995 {100.0, 50.0,PathPointTypeStart, 0, 0}, /*0*/ 996 {99.0, 60.0, PathPointTypeLine, 0, 0}, /*1*/ 997 {96.0, 69.5, PathPointTypeLine, 0, 0}, /*2*/ 998 {91.5, 78.0, PathPointTypeLine, 0, 0}, /*3*/ 999 {85.5, 85.5, PathPointTypeLine, 0, 0}, /*4*/ 1000 {78.0, 91.5, PathPointTypeLine, 0, 0}, /*5*/ 1001 {69.5, 96.0, PathPointTypeLine, 0, 0}, /*6*/ 1002 {60.0, 99.0, PathPointTypeLine, 0, 0}, /*7*/ 1003 {50.0, 100.0,PathPointTypeLine, 0, 0} /*8*/ 1004 }; 1005 1006 static void test_flatten(void) 1007 { 1008 GpStatus status; 1009 GpPath *path; 1010 GpMatrix *m; 1011 1012 status = GdipCreatePath(FillModeAlternate, &path); 1013 expect(Ok, status); 1014 status = GdipCreateMatrix(&m); 1015 expect(Ok, status); 1016 1017 /* NULL arguments */ 1018 status = GdipFlattenPath(NULL, NULL, 0.0); 1019 expect(InvalidParameter, status); 1020 status = GdipFlattenPath(NULL, m, 0.0); 1021 expect(InvalidParameter, status); 1022 1023 /* flatten empty path */ 1024 status = GdipFlattenPath(path, NULL, 1.0); 1025 expect(Ok, status); 1026 1027 status = GdipAddPathEllipse(path, 0.0, 0.0, 100.0, 50.0); 1028 expect(Ok, status); 1029 1030 status = GdipFlattenPath(path, NULL, 1.0); 1031 expect(Ok, status); 1032 ok_path(path, flattenellipse_path, sizeof(flattenellipse_path)/sizeof(path_test_t), TRUE); 1033 1034 status = GdipResetPath(path); 1035 expect(Ok, status); 1036 status = GdipAddPathLine(path, 5.0, 10.0, 50.0, 100.0); 1037 expect(Ok, status); 1038 status = GdipFlattenPath(path, NULL, 1.0); 1039 expect(Ok, status); 1040 ok_path(path, flattenline_path, sizeof(flattenline_path)/sizeof(path_test_t), FALSE); 1041 1042 status = GdipResetPath(path); 1043 expect(Ok, status); 1044 status = GdipAddPathArc(path, 0.0, 0.0, 100.0, 50.0, 0.0, 90.0); 1045 expect(Ok, status); 
1046 status = GdipFlattenPath(path, NULL, 1.0); 1047 expect(Ok, status); 1048 ok_path(path, flattenarc_path, sizeof(flattenarc_path)/sizeof(path_test_t), TRUE); 1049 1050 /* easy case - quater of a full circle */ 1051 status = GdipResetPath(path); 1052 expect(Ok, status); 1053 status = GdipAddPathArc(path, 0.0, 0.0, 100.0, 100.0, 0.0, 90.0); 1054 expect(Ok, status); 1055 status = GdipFlattenPath(path, NULL, 1.0); 1056 expect(Ok, status); 1057 ok_path(path, flattenquater_path, sizeof(flattenquater_path)/sizeof(path_test_t), FALSE); 1058 1059 GdipDeleteMatrix(m); 1060 GdipDeletePath(path); 1061 } 1062 1063 static void test_isvisible(void) 1064 { 1065 GpPath *path; 1066 GpGraphics *graphics = NULL; 1067 HDC hdc = GetDC(0); 1068 BOOL result; 1069 GpStatus status; 1070 1071 status = GdipCreateFromHDC(hdc, &graphics); 1072 expect(Ok, status); 1073 status = GdipCreatePath(FillModeAlternate, &path); 1074 expect(Ok, status); 1075 1076 /* NULL */ 1077 status = GdipIsVisiblePathPoint(NULL, 0.0, 0.0, NULL, NULL); 1078 expect(InvalidParameter, status); 1079 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, NULL, NULL); 1080 expect(InvalidParameter, status); 1081 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, NULL, NULL); 1082 expect(InvalidParameter, status); 1083 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, graphics, NULL); 1084 expect(InvalidParameter, status); 1085 1086 /* empty path */ 1087 result = TRUE; 1088 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, NULL, &result); 1089 expect(Ok, status); 1090 expect(FALSE, result); 1091 /* rect */ 1092 status = GdipAddPathRectangle(path, 0.0, 0.0, 10.0, 10.0); 1093 expect(Ok, status); 1094 result = FALSE; 1095 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, NULL, &result); 1096 expect(Ok, status); 1097 expect(TRUE, result); 1098 result = TRUE; 1099 status = GdipIsVisiblePathPoint(path, 11.0, 11.0, NULL, &result); 1100 expect(Ok, status); 1101 expect(FALSE, result); 1102 /* not affected by clipping */ 1103 status = 
GdipSetClipRect(graphics, 5.0, 5.0, 5.0, 5.0, CombineModeReplace); 1104 expect(Ok, status); 1105 result = FALSE; 1106 status = GdipIsVisiblePathPoint(path, 0.0, 0.0, graphics, &result); 1107 expect(Ok, status); 1108 expect(TRUE, result); 1109 1110 GdipDeletePath(path); 1111 GdipDeleteGraphics(graphics); 1112 ReleaseDC(0, hdc); 1113 } 1114 1115 START_TEST(graphicspath) 1116 { 1117 struct GdiplusStartupInput gdiplusStartupInput; 1118 ULONG_PTR gdiplusToken; 1119 1120 gdiplusStartupInput.GdiplusVersion = 1; 1121 gdiplusStartupInput.DebugEventCallback = NULL; 1122 gdiplusStartupInput.SuppressBackgroundThread = 0; 1123 gdiplusStartupInput.SuppressExternalCodecs = 0; 1124 1125 GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL); 1126 1127 test_constructor_destructor(); 1128 test_getpathdata(); 1129 test_line2(); 1130 test_arc(); 1131 test_worldbounds(); 1132 test_pathpath(); 1133 test_ellipse(); 1134 test_linei(); 1135 test_rect(); 1136 test_polygon(); 1137 test_lastpoint(); 1138 test_addcurve(); 1139 test_addclosedcurve(); 1140 test_reverse(); 1141 test_addpie(); 1142 test_flatten(); 1143 test_isvisible(); 1144 1145 GdiplusShutdown(gdiplusToken); 1146 }
module Controller.Dialog (openFileDialog,saveFileDialog
                         ,chooseColumnDialog
                         ,chooseColumnsDialog
                         ,chooseRowDialog
                         ,chooseRowsDialog
                         ,chooseColumnsAndTableDialog,chooseRowsAndTableDialog
                         ,previewImage,warning
                         ,nonNumericDataWarningText,nonNumericDataWarning
                         ,noColumnsSelectedText,noColumnsSelected
                         ,noRowsSelectedText,noRowsSelected
                         ,whenSure) where

import Control.Applicative ((<$>))
import Control.Monad (when)

import Graphics.UI.WX (Prop ((:=)),on)
import qualified Graphics.UI.WX as WX
import qualified Graphics.UI.WXCore as WXC

import System.FilePath (takeDirectory)

import Controller (Controller,onView,onGridView,setOnConfig,getFromConfig)
import View (View,frame)
import View.Component.Grid (getColumnLabels,getRowLabels)
import View.GridPage (captions)
import View.Dialog.Complex (Layout (..),Widget (..),Modifier (..)
                           ,showSimpleDialog,okButton,cancelButton)
import qualified View.Dialog.Simple as Simple
import Config (getLastHandledPathForDialogs,setLastHandledPath)
import Util (justWhen)
import I18n (__)

openFileDialog :: String -> [(String,[String])] -> Controller (Maybe FilePath)
openFileDialog caption filters = do
  lastPath <- getFromConfig getLastHandledPathForDialogs
  result <- onView $ \view ->
    WX.fileOpenDialog (frame view) True True caption
                      ((__ "Any file",["*"]) : filters) lastPath ""
  justWhen result $ setOnConfig . setLastHandledPath . takeDirectory
  return result

saveFileDialog :: String -> [(String,[String])] -> Controller (Maybe FilePath)
saveFileDialog caption filters = do
  lastPath <- getFromConfig getLastHandledPathForDialogs
  result <- onView $ \view ->
    WX.fileSaveDialog (frame view) True True caption
                      ((__ "Any file",["*"]) : filters) lastPath ""
  justWhen result $ setOnConfig . setLastHandledPath . takeDirectory
  return result

chooseOneFromListDialog :: String -> String -> String -> [String] -> Maybe Int
                        -> Controller (Maybe Int)
chooseOneFromListDialog dialogCaption listCaption okLabel list init =
  let listBox = SingleListBox list id $ \_ -> id
      dialog = Modifier Margin $ Column
        [ Modifier (Boxed listCaption) $ Modifier HFill $ Widget listBox
        , Modifier Center $ Row [ Widget $ DefaultButton okLabel ()
                                , cancelButton]]
  in maybe Nothing id <$> (onView $ showSimpleDialog dialogCaption dialog init)

chooseSomeFromListDialog :: String -> String -> String -> [String] -> [Int]
                         -> Controller [Int]
chooseSomeFromListDialog dialogCaption listCaption okLabel list init =
  let listBox = MultiListBox list id $ \_ -> id
      dialog = Modifier Margin $ Column
        [ Modifier (Boxed listCaption) $ Modifier HFill $ Widget listBox
        , Modifier Center $ Row [ Widget $ DefaultButton okLabel ()
                                , cancelButton]]
  in maybe [] id <$> (onView $ showSimpleDialog dialogCaption dialog init)

chooseColumnDialog :: String -> String -> Maybe Int -> Controller (Maybe Int)
chooseColumnDialog caption okLabel init = do
  labels <- onGridView getColumnLabels
  chooseOneFromListDialog caption (__ "Column") okLabel labels init

chooseRowDialog :: String -> String -> Maybe Int -> Controller (Maybe Int)
chooseRowDialog caption okLabel init = do
  labels <- onGridView getRowLabels
  chooseOneFromListDialog caption (__ "Row") okLabel labels init

chooseColumnsDialog :: String -> String -> [Int] -> Controller [Int]
chooseColumnsDialog caption okLabel init = do
  labels <- onGridView getColumnLabels
  chooseSomeFromListDialog caption (__ "Columns") okLabel labels init

chooseRowsDialog :: String -> String -> [Int] -> Controller [Int]
chooseRowsDialog caption okLabel init = do
  labels <- onGridView getRowLabels
  chooseSomeFromListDialog caption (__ "Rows") okLabel labels init

chooseSomethingAndTableDialog :: String -> String -> String -> [String] -> [Int]
                              -> Controller (Maybe ([Int],Maybe Int))
chooseSomethingAndTableDialog dialogCaption someCaption okLabel something init = do
  tableCaptions <- onView captions
  let someList = MultiListBox something fst $ \(_,table) some -> (some,table)
      tableList = SingleListBox tableCaptions snd $ \(some,_) table -> (some,table)
      dialog = Modifier Margin $ Column
        [ Modifier Center $ Row
          [ Modifier (Boxed someCaption) $ Modifier HFill $ Widget someList
          , Label $ __ "to"
          , Modifier (Boxed $ __ "Table") $ Widget tableList ]
        , Modifier Center $ Row [Widget $ DefaultButton okLabel (), cancelButton]]
  onView $ showSimpleDialog dialogCaption dialog (init,Nothing)

chooseColumnsAndTableDialog :: String -> String -> [Int]
                            -> Controller (Maybe ([Int],Maybe Int))
chooseColumnsAndTableDialog caption okLabel init = do
  labels <- onGridView getColumnLabels
  chooseSomethingAndTableDialog caption (__ "Columns") okLabel labels init

chooseRowsAndTableDialog :: String -> String -> [Int]
                         -> Controller (Maybe ([Int],Maybe Int))
chooseRowsAndTableDialog caption okLabel init = do
  labels <- onGridView getRowLabels
  chooseSomethingAndTableDialog caption (__ "Rows") okLabel labels init

previewImage :: WXC.Bitmap () -> Controller ()
previewImage bitmap =
  let drawBitmap dc _ = WX.drawBitmap dc bitmap WX.pointZero False []
      makeWindow dialog = do
        size <- do
          width <- WXC.bitmapGetWidth bitmap
          height <- WXC.bitmapGetHeight bitmap
          return (width,height)
        WX.scrolledWindow dialog [ WX.virtualSize := uncurry WX.sz size
                                 , WX.clientSize := uncurry WX.sz size
                                 , on WX.paint := drawBitmap]
      dialog = Modifier Margin $ Column [ Widget $ ScrolledWindow makeWindow
                                        , Modifier Center $ okButton ()]
  in onView (showSimpleDialog (__ "Preview") dialog ()) >> return ()

warning :: String -> Controller ()
warning = onView . Simple.warning

nonNumericDataWarningText :: [String] -> String
nonNumericDataWarningText labels =
  unwords [__ "Column(s)",show labels,__ "contains non-numeric data"]

nonNumericDataWarning :: [String] -> Controller ()
nonNumericDataWarning = warning . nonNumericDataWarningText

noColumnsSelectedText :: String
noColumnsSelectedText = __ "No columns selected"

noColumnsSelected :: Controller ()
noColumnsSelected = warning noColumnsSelectedText

noRowsSelectedText :: String
noRowsSelectedText = __ "No rows selected"

noRowsSelected :: Controller ()
noRowsSelected = warning noRowsSelectedText

whenSure :: String -> Controller () -> Controller ()
whenSure caption doThis =
  let caption' = caption ++ ": " ++ (__ "Are you sure?")
  in do
    sure <- onView $ \view ->
      WX.confirmDialog (frame view) caption' caption' False
    when sure doThis
/html/transform/transform_test.go
https://code.google.com/p/go-html-transform/
Go | 211 lines | 183 code | 21 blank | 7 comment | 15 complexity

/*
 Copyright 2010 Jeremy Wall ([email protected])
 Use of this source code is governed by the Artistic License 2.0.
 That License is included in the LICENSE file.
*/
package transform

import (
    "code.google.com/p/go-html-transform/h5"
    "testing"
)

func assertEqual(t *testing.T, val interface{}, expected interface{}) {
    if val != expected {
        t.Errorf("NotEqual Expected: [%s] Actual: [%s]",
            expected, val)
    }
}

func assertNotNil(t *testing.T, val interface{}) {
    if val == nil {
        t.Errorf("Value is Nil")
    }
}

func TestNewTransformer(t *testing.T) {
    tree, _ := h5.NewFromString("<html><body><div id=\"foo\"></div></body></html>")
    tf := New(tree)
    // hacky way of comparing an uncomparable type
    assertEqual(t, tf.Doc().Type, tree.Top().Type)
}

func TestTransformApply(t *testing.T) {
    tree, _ := h5.NewFromString("<html><body><div id=\"foo\"></div></body></html>")
    tf := New(tree)
    n := h5.Text("bar")
    tf.Apply(AppendChildren(n), "body")
    newDoc := tf.String()
    assertEqual(t, newDoc, "<html><head></head><body><div id=\"foo\"></div>bar</body></html>")
}

func TestTransformApplyAll(t *testing.T) {
    tree, _ := h5.NewFromString("<html><head></head><body><ul><li>foo</ul></body></html>")
    tf := New(tree)
    n := h5.Text("bar")
    n2 := h5.Text("quux")
    t1, _ := Trans(AppendChildren(n), "body li")
    t2, _ := Trans(AppendChildren(n2), "body li")
    tf.ApplyAll(t1, t2)
    assertEqual(t, tf.String(), "<html><head></head><body><ul><li>foobarquux</li></ul></body></html>")
}

func TestTransformApplyMulti(t *testing.T) {
    tree, _ := h5.NewFromString("<html><body><div id=\"foo\"></div></body></html>")
    tf := New(tree)
    tf.Apply(AppendChildren(h5.Text("")), "body")
    tf.Apply(TransformAttrib("id", func(val string) string {
        t.Logf("Rewriting Url")
        return "bar"
    }),
        "div")
    newDoc := tf.String()
    assertEqual(t, newDoc, "<html><head></head><body><div id=\"bar\"></div></body></html>")
}

func TestAppendChildren(t *testing.T) {
    node := h5.Anchor("", "")
    child := h5.Text("foo ")
    child2 := h5.Text("bar")
    AppendChildren(child, child2)(node)
    assertEqual(t, h5.NewTree(node).String(), "<a>foo bar</a>")
}

func TestRemoveChildren(t *testing.T) {
    node := h5.Anchor("", "foo")
    RemoveChildren()(node)
    assertEqual(t, h5.NewTree(node).String(), "<a></a>")
}

func TestReplaceChildren(t *testing.T) {
    node := h5.Anchor("", "foo")
    assertEqual(t, h5.NewTree(node).String(), "<a>foo</a>")
    child := h5.Text("baz ")
    child2 := h5.Text("quux")
    ReplaceChildren(child, child2)(node)
    assertEqual(t, h5.NewTree(node).String(), "<a>baz quux</a>")
}

func TestReplace(t *testing.T) {
    defer func() {
        if err := recover(); err != nil {
            t.Error("TestReplace paniced")
        }
    }()
    node := h5.Div("", nil, h5.Div("", nil, h5.Text("foo")))
    replacement := h5.Div("", nil, h5.Text("bar"))
    Replace(replacement)(node.FirstChild)
    assertEqual(t, h5.NewTree(node).String(),
        "<div><div>bar</div></div>")
}

func TestReplaceSplice(t *testing.T) {
    defer func() {
        if err := recover(); err != nil {
            t.Error("TestReplaceSplice paniced")
        }
    }()
    node := h5.Div("foo", nil,
        h5.Text("foo"),
        h5.Element("span", nil, h5.Text("bar")),
    )
    node2 := h5.Element("span", nil, h5.Text("foo"))
    Replace(node2)(node.FirstChild)
    assertEqual(t, h5.NewTree(node).String(),
        "<div id=\"foo\"><span>foo</span><span>bar</span></div>")
}

func TestReplaceSpliceOnRootNode(t *testing.T) {
    defer func() {
        if err := recover(); err == nil {
            t.Error("TestReplaceSpliceOnRootNode didn't panic")
        }
    }()
    tree, _ := h5.NewFromString("<div id=\"foo\">foo<span>bar</span></div><")
    doc := tree.Top()
    ns, _ := h5.NewFromString("<span>foo</span>")
    f := Replace(ns.Top())
    f(doc)
    assertEqual(t, h5.Data(doc.FirstChild), "span")
    assertEqual(t, h5.Data(doc.FirstChild.FirstChild), "foo")
}

func TestModifyAttrib(t *testing.T) {
    node := h5.Anchor("", "")
    ModifyAttrib("id", "bar")(node)
    assertEqual(t, node.Attr[0].Val, "bar")
    ModifyAttrib("class", "baz")(node)
    assertEqual(t, node.Attr[1].Key, "class")
    assertEqual(t, node.Attr[1].Val, "baz")
}

func TestTransformAttrib(t *testing.T) {
    node := h5.Anchor("", "")
    ModifyAttrib("id", "foo")(node)
    assertEqual(t, node.Attr[0].Val, "foo")
    TransformAttrib("id", func(s string) string { return "bar" })(node)
    assertEqual(t, node.Attr[0].Val, "bar")
}

func TestDoAll(t *testing.T) {
    tree, _ := h5.NewFromString("<div id=\"foo\">foo</div><")
    node := tree.Top()
    preNode := h5.Text("pre node")
    postNode := h5.Text("post node")
    f := DoAll(AppendChildren(postNode),
        PrependChildren(preNode))
    f(node)
    assertEqual(t, h5.Data(node.FirstChild), h5.Data(preNode))
    assertEqual(t, h5.Data(node.LastChild), h5.Data(postNode))
}

func TestCopyAnd(t *testing.T) {
    defer func() {
        if err := recover(); err != nil {
            t.Errorf("TestCopyAnd paniced %s", err)
        }
    }()
    node := h5.Div("", nil, h5.Div("", nil, h5.Text("foo")))
    assertEqual(t, h5.NewTree(node).String(),
        "<div><div>foo</div></div>")
    CopyAnd(
        AppendChildren(h5.Text("bar")),
        ReplaceChildren(h5.Text("baz")),
    )(node.FirstChild)
    assertEqual(t, h5.NewTree(node).String(),
        "<div><div>foobar</div><div>baz</div></div>")
}

func TestTransformSubtransforms(t *testing.T) {
    defer func() {
        if err := recover(); err != nil {
            t.Errorf("TestTransformSubtransforms paniced %s", err)
        }
    }()
    tree, _ := h5.NewFromString("<html><body><ul><li>foo</ul></body></html>")

    f, _ := Subtransform(CopyAnd(
        ReplaceChildren(h5.Text("bar")),
        ReplaceChildren(h5.Text("baz"), h5.Text("quux")),
    ), "li")
    tf := New(tree)
    t1, _ := Trans(f, "ul")
    tf.ApplyAll(t1)
    assertEqual(t, tf.String(),
        "<html><head></head><body><ul><li>bar</li><li>bazquux</li></ul></body></html>")
}

// TODO(jwall): benchmarking tests
func BenchmarkTransformApply(b *testing.B) {
    for i := 0; i < b.N; i++ {
        tree, _ := h5.NewFromString("<html><body><div id=\"foo\"></div></body></html")
        tf := New(tree)
        tf.Apply(AppendChildren(h5.Text("")), "body")
        tf.Apply(TransformAttrib("id", func(val string) string {
            return "bar"
        }),
            "div")
        tf.Doc()
    }
}
vrworld/isvalid

1 if vrworld object is valid, 0 if not

Syntax

x = isvalid(vrworld_object)

Arguments

vrworld_object    A vrworld object representing a virtual world.

Description

A vrworld object is considered valid if its associated virtual world still exists. x = isvalid(vrworld_object) returns an array that contains 1 where the elements of vrworld_object are valid vrworld objects and 0 where they are not. Use this method to check whether a vrworld object is still valid; a delete or vrclear command can make a vrworld object invalid.

Version History

Introduced before R2006a
Declarative conditional rendering in React

Talks about: breakpoints, react, and rendering

One feature that often surprises people while teaching them React is that a component does not have to render anything. It seems trivial at first, but it quickly becomes clear that render-nothing components can reduce boilerplate code and improve code reuse. In its simplest (shortest) form a render-nothing component looks like the following snippet. It does not actually do anything and is not particularly helpful on its own. You could add it to every other component in your application without breaking or influencing anything.

const RendersNothing = () => <></>

Now consider the following example, which adds some if-then-else logic to the same component:

const MightRenderSomething = () => {
  if (someCondition) {
    return <span>hello world!</span>
  }
  return <></>
}

This component encapsulates the if-then-else logic of conditionally rendering a hello-world message. Instead of cluttering your entire app with the same logic, you can now simply reuse the component that contains this if condition.

To see the full power of this technique, consider the following example. First, we are going to define a hook that reads the current window width, then define components that conditionally render based on the current window width, and finally use those components in an example application.
const useWindowWidth = () => {
  const [width, setWidth] = React.useState(0)

  React.useEffect(() => {
    const handleResize = () => {
      setWidth(window.innerWidth)
    }
    window.addEventListener("resize", handleResize)
    return () => {
      window.removeEventListener("resize", handleResize)
    }
  }, [])

  return width
}

The following components use that hook to implement UI breakpoints for small (mobile) and large (desktop) screens. Note that the value 768 is just an example - replace it with whatever your design system tells you to.

const ForMobileDevicesOnly = (props) => {
  const windowWidth = useWindowWidth()
  if (windowWidth < 768) {
    return <>{props.children}</>
  }
  return <></>
}

const ForDesktopDevicesOnly = (props) => {
  const windowWidth = useWindowWidth()
  if (windowWidth >= 768) {
    return <>{props.children}</>
  }
  return <></>
}

Both of these components simply render nothing when the window width does not have an appropriate size. If the window width does have the right size, they render their children. We can use those components in our application like this:

const SomeActualComponent = () => (
  <div>
    <h1>common headline</h1>
    <ForMobileDevicesOnly>
      <span>only visible on mobile devices</span>
    </ForMobileDevicesOnly>
    <ForDesktopDevicesOnly>
      <span>only visible on desktop devices</span>
    </ForDesktopDevicesOnly>
  </div>
)

The above code snippet declares that some part of the UI can only be seen by mobile users, while others can only be seen by desktop users. Parts of the UI that are shared amongst all users are not wrapped by any of the components defined above.
// Package h provides a HTML generation abstraction for Go. It does so by
// allowing you to write verbose and often annoying-looking, but simple and
// idiomatic, Go to generate HTML for the purposes of rendering in web
// browsers.
//
// This approach, though verbose, allows for building powerful abstractions
// with simple looking APIs. Look at
// https://godoc.org/github.com/daaku/go.h?importers for some examples.
//
// **Unstable API. Work in progress.**
package h

import (
	"bytes"
	"context"
	"fmt"
	"html"
	"io"
	"log"
	"reflect"
	"strconv"
)

// HTML that renders HTML. HTML is a recursive type which is eventually made
// up of Primitives.
type HTML interface {
	HTML(context.Context) (HTML, error)
}

// Primitive generates HTML. They are terminal, as opposed to recursive like
// HTML. Primitive types satisfy HTML, but it is an error to use them as such.
type Primitive interface {
	Write(context.Context, io.Writer) (int, error)
}

// Write HTML into a writer.
func Write(ctx context.Context, w io.Writer, h HTML) (int, error) {
	var err error
	for {
		switch t := h.(type) {
		case nil:
			return 0, nil
		case Primitive:
			return t.Write(ctx, w)
		}
		h, err = h.HTML(ctx)
		if err != nil {
			return 0, err
		}
	}
}

// Render HTML as a string.
func Render(ctx context.Context, h HTML) (string, error) {
	buffer := bytes.NewBufferString("")
	_, err := Write(ctx, buffer, h)
	return buffer.String(), err
}

// Compile static HTML into HTML. Will panic if there are errors.
func Compile(ctx context.Context, h HTML) HTML {
	m, err := Render(ctx, h)
	if err != nil {
		log.Fatalf("Failed to Compile HTML %v with error %s", h, err)
	}
	return Unsafe(m)
}

// Attributes are automatically rendered and automatically render most
// primitive types.
type Attributes map[string]interface{}

// Render an attribute value.
func writeValue(w io.Writer, i interface{}) (int, error) {
	var res string
	value := reflect.ValueOf(i)
	switch value.Kind() {
	case reflect.Bool:
		res = strconv.FormatBool(value.Bool())
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		res = strconv.FormatInt(value.Int(), 10)
	case reflect.Float32, reflect.Float64:
		res = strconv.FormatFloat(value.Float(), 'E', 3, 64)
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		res = strconv.FormatUint(value.Uint(), 10)
	case reflect.String:
		res = value.String()
	default:
		return 0, fmt.Errorf(
			`Could not write attribute value "%v" with kind %s`, i, value.Kind())
	}
	return fmt.Fprint(w, html.EscapeString(res))
}

// Check if a value is empty.
func isZero(i interface{}) (bool, error) {
	value := reflect.ValueOf(i)
	switch value.Kind() {
	case reflect.Bool:
		return value.Bool() == false, nil
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return value.Int() == 0, nil
	case reflect.Float32, reflect.Float64:
		return value.Float() == 0, nil
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		return value.Uint() == 0, nil
	case reflect.String:
		return value.String() == "", nil
	default:
		return false, fmt.Errorf(
			`Could not work with attribute value "%v" with kind %s`, i, value.Kind())
	}
}

// Render an attribute value pair.
func writeKeyValue(w io.Writer, key string, val interface{}) (int, error) {
	var err error
	var skip bool
	skip, err = isZero(val)
	if err != nil {
		return 0, err
	}
	if skip {
		return 0, nil
	}
	var written, i int
	i, err = fmt.Fprintf(w, ` %s`, key)
	written += i
	if err != nil {
		return written, err
	}
	// bool values are not written, only the key is
	if reflect.ValueOf(val).Kind() == reflect.Bool {
		return written, nil
	}
	i, err = fmt.Fprintf(w, `="`)
	written += i
	if err != nil {
		return written, err
	}
	i, err = writeValue(w, val)
	written += i
	if err != nil {
		return written, err
	}
	i, err = fmt.Fprint(w, `"`)
	written += i
	if err != nil {
		return written, err
	}
	return written, nil
}

// Render attributes with using the optional key prefix.
func (attrs Attributes) Write(w io.Writer, prefix string) (int, error) {
	var written, i int
	var err error
	for key, val := range attrs {
		i, err = writeKeyValue(w, prefix+key, val)
		written += i
		if err != nil {
			return written, err
		}
	}
	return written, nil
}
Bug #3250

Translation of form.submit

Added by Sebastian Fischer over 12 years ago. Updated about 11 years ago.

Status: New
Priority: Should have
Assignee: -
Start date: 2009-05-08
Due date:
% Done: 0%
Estimated time:

Description

Currently it's impossible to translate form.submit:

<f:form.submit><f:translate key="lang_submit"/></f:form.submit>

This construct doesn't work, and no matching example is given in viewhelpertest. There is also no $this->renderChildren(); call in SubmitViewHelper.php.

#1 Updated by Sebastian Fischer over 12 years ago

public function render($name = '', $value = '') {
	$this->tag->addAttribute('type', 'submit');
	if ($name !== '') {
		$this->tag->addAttribute('name', $name);
	}
	$content = $this->renderChildren();
	if ($content != '') {
		$value = $content;
	}
	if ($value !== '') {
		$this->tag->addAttribute('value', $value);
	}
	return $this->tag->render();
}

Here $value is set from the result of renderChildren(), and now translation works. If renderChildren() returns an empty string, the default value (which is still possible) isn't overwritten.
Erlang TDD hands on project – WorkerNet part 3

The third part will unveil the job layer, the part of the Worker Net "stack" that handles the jobs of the network. Just as before, it is easy to express the functionality through stories.

"I want to be able to describe a job as
• A set of Files [#wn_file{}]
• The possible resource types needed (disjunction)
• A list of commands for running the job
• A timeout the job is given while running (in seconds)
• A job id – primary key for the job"

"I want to be able to register a job in the job layer, through any node. Several jobs should be queued on the same resource type(s), being processed one by one in the order they were queued. A job must be possible to cancel before it starts running, through any node."

"The output generated by a job should be possible to see in realtime, through any node, and stored to logfiles."

"Once a job is done, the result should be stored in the file layer, together with the logs. I want to be able to delete a job once it's done, through any node."

"I want to be able to list all jobs in the system, through any node."

These tests require a lot more setup than the old ones, as the job layer must be able to transfer, add and delete files. The job layer must also be able to list the available resources. What remains is to design the functionality of the layers involved; I have prepared such a design, which can be seen in the picture below.

Job layer design in rough outline

The text which is in bold denotes certain parts of interest that need fleshing out or some extra consideration. The parts which require more thought are
• The requirements for the resource process
• How is the job process going to acquire the Pid of all the resource processes?
• How is the negotiation going to be implemented? (timeouts etc.)
• How will the job be executed?

Once these parts are answered, we're almost done! And since we are doing TDD here, we are expecting answers in the form of tests.
First, add a new line to the Makefile for testing the job layer:

all:
	erlc -pa . -o ebin/ src/*.erl test/*.erl

test: all
	erl -pa ebin/ -eval 'eunit:test(wn_resource_layer,[verbose]), init:stop().'
	erl -pa ebin/ -eval 'eunit:test(wn_file_layer,[verbose]), init:stop().'
	erl -pa ebin/ -eval 'eunit:test(wn_job_layer,[verbose]), init:stop().'

dialyze:
	dialyzer src/*.erl test/*.erl

full: all test dialyze

Pick the low hanging fruit first: a test with registration of a job and listing of the job should be nice.

%%% @author Gianfranco <[email protected]>
%%% @copyright (C) 2010, Gianfranco
%%% Created : 26 Dec 2010 by Gianfranco <[email protected]>

-module(wn_job_layer_tests).
-include_lib("eunit/include/eunit.hrl").
-include("include/worker_net.hrl").
-define(NODE_ROOT,
        "/Users/zenon/ErlangBlog/worker_net-0.1/node_root/").

local_test_() ->
    {foreach,
     fun setup/0,
     fun cleanup/1,
     [{"Can register locally", fun register_locally/0}
     ]}.

register_locally() ->
    Path = create_file_at(?NODE_ROOT),
    File1 = #wn_file{id = "File1",file = Path,resides = node()},
    File2 = #wn_file{id = "File2",file = Path,resides = node()},
    Job = #wn_job{id = "JobId",
                  files = [File1,File2],
                  resources = ['non-existent'],
                  commands = ["ls -l"]
                 },
    ?assertEqual(ok,wn_job_layer:register(Job)),
    [Res] = wn_job_layer:list_all_jobs(),
    ?assertEqual("JobId",Res#wn_job.id),
    ?assertEqual(['non-existent'],Res#wn_job.resources),
    ?assertEqual([File1,File2],Res#wn_job.files),
    ?assertEqual(["ls -l"],Res#wn_job.commands).

%% -----------------------------------------------------------------
setup() ->
    {ok,_} = net_kernel:start([eunit_resource,shortnames]),
    erlang:set_cookie(node(),eunit),
    {ok,_} = wn_file_layer:start_link(?NODE_ROOT),
    {ok,_} = wn_resource_layer:start_link(),
    {ok,_} = wn_job_layer:start_link(),
    ok.

cleanup(_) ->
    clean_up(?NODE_ROOT),
    ok = net_kernel:stop(),
    ok = wn_file_layer:stop(),
    ok = wn_resource_layer:stop(),
    ok = wn_job_layer:stop().
create_file_at(X) ->
    Path = X++"EUnitFile",
    ok = filelib:ensure_dir(X),
    ok = file:write_file(Path,<<1,2,3>>),
    Path.

clean_up(X) ->
    case filelib:is_dir(X) of
        true ->
            {ok,Files} = file:list_dir(X),
            lists:foreach(
              fun(File) -> clean_up(X++"/"++File) end,Files),
            file:del_dir(X);
        false -> ok = file:delete(X)
    end.

This first test is quite basic and shows how a job has files that may be located on any node, a job id (JID) and a list of resource types (disjunction). To make this test pass, we need to start implementing the record type. The new entry in worker_net.hrl is

-record(wn_job,
        {id :: string(),
         files :: [#wn_file{}],
         resources :: [atom()],
         commands :: [string()]
        }).

Next, the actual logic module needs to be implemented; gen_server fits the bill quite nicely for the job layer. For each part, it is good practice to make it as simple as possible - "you ain't gonna need it" (YAGNI) is a good thing to remember. Without further ado, here is the implementation that passes the first test, of denoting and registering a job through any node. Take note that two new modules had to be introduced:

• wn_job_keeper.erl - the job process started by the job layer; its pid is sent to the appropriate resource processes. Started after registration.
• wn_resource_process.erl - the resource process started whenever a new resource is registered.

As the design requires a resource process to be started for each newly registered resource, we need to test the new functionality: the side effect of registering a new resource must be checked. So first out, the modifications to the wn_resource record

-record(wn_resource,
        {name :: string(),
         type :: [{atom(), non_neg_integer() | infinity}],
         resides :: node(),
         pid :: pid()
        }).

as well as the modification to the existing tests in the wn_resource_layer_tests.erl module:

resource_processes_are_alive([],_) -> ok;
resource_processes_are_alive([Expected|Tail],List) ->
    #wn_resource{name = Name, type = Type, resides = Resides} = Expected,
    Filtered = lists:filter(
                 fun(#wn_resource{name=N,type=T,resides=R}) ->
                         N == Name andalso T == Type andalso R == Resides
                 end,List),
    ?assertMatch([_X],Filtered),
    [T] = Filtered,
    ?assertEqual(true,rpc:call(node(T#wn_resource.pid),erlang,is_process_alive,
                               [T#wn_resource.pid])),
    resource_processes_are_alive(Tail,List).

The function resource_processes_are_alive/2 was added to each test in appropriate places where a resource is registered. Once this modification was made, the change was imposed on the wn_resource_layer.erl module. The changes are shown with the %% Change comment:

try_deregister(State,Name) ->
    case ets:lookup(State#state.resources,Name) of
        [] -> {error,noexists};
        %% Changed
        [{Name,WnResource}] ->
            exit(WnResource#wn_resource.pid,deregistered),
            ets:delete(State#state.resources,Name),
            ok
    end.

try_register(State,Resource) ->
    #wn_resource{name=Name} = Resource,
    case ets:lookup(State#state.resources,Name) of
        [] ->
            %% Changed
            Pid = wn_resource_process:start(),
            ets:insert(State#state.resources,
                       {Name,Resource#wn_resource{pid=Pid}}),
            ok;
        _ -> {error,already_exists}
    end.

Of course, the minimal new module wn_resource_process.erl is shown:

%%%-------------------------------------------------------------------
%%% @author Gianfranco <[email protected]>
%%% @copyright (C) 2011, Gianfranco
%%% Created : 11 Jan 2011 by Gianfranco <[email protected]>
%%%-------------------------------------------------------------------
-module(wn_resource_process).
-export([start/0, init/1]).

start() -> spawn(wn_resource_process, init, [free]).

init(X) -> loop(X).

loop(X) ->
    receive
        _ -> ok
    end,
    loop(X).

Even though the module is trivial, it is all that is needed for the moment. Keep it simple.

Now the job layer implementation that will make the test pass (and a bit more):

%%%-------------------------------------------------------------------
%%% @author Gianfranco <[email protected]>
%%% @copyright (C) 2011, Gianfranco
%%% Created : 4 Jan 2011 by Gianfranco <[email protected]>
%%%-------------------------------------------------------------------
-module(wn_job_layer).
-behaviour(gen_server).
-include("include/worker_net.hrl").

%% API
-export([start_link/0,register/1,list_all_jobs/0, stop/0]).

%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(state,
        {jobs % ets table {Name,Pid,WnJob}
        }).

%%%==========================================================
%%% API
%%%==========================================================
-spec(start_link() -> {ok,pid()} | {error,term()}).
start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

-spec(register(#wn_job{}) -> ok | {error,term()}).
register(WnJob) ->
    case try_send_files(WnJob#wn_job.files) of
        ok -> gen_server:call(?MODULE,{add_job,WnJob});
        E -> E
    end.

-spec(list_all_jobs() -> [#wn_job{}]).
list_all_jobs() ->
    gen_server:call(?MODULE,list_all_jobs).

-spec(stop() -> ok).
stop() -> gen_server:call(?MODULE,stop).

%%%==========================================================
%%% gen_server callbacks
%%%==========================================================
init([]) ->
    {ok, #state{jobs = ets:new(jobs_table,[set])}}.

handle_cast(_Msg,State) -> {noreply,State}.

handle_call(stop,_From,State) ->
    {stop,normal,ok,State};
handle_call(list_all_jobs,From,State) ->
    spawn_link(job_collector(From)),
    {noreply,State};
handle_call(list_jobs,_From,State) ->
    {reply,[WnJob || {_,_,WnJob} <- ets:tab2list(State#state.jobs)],State};
handle_call({add_job,WnJob}, _From, State) ->
    JobId = WnJob#wn_job.id,
    {reply,
     case ets:lookup(State#state.jobs,JobId) of
         [] ->
             Pid = wn_job_keeper:start_link(WnJob),
             ets:insert(State#state.jobs,{JobId,Pid,WnJob}),
             lists:foreach(
               fun(WnResource) ->
                       case resource_is_sufficient(WnJob,WnResource) of
                           true -> signal_resource(Pid,WnResource);
                           false -> ignore
                       end
               end,wn_resource_layer:list_resources()),
             ok;
         [_] ->
             lists:foreach(
               fun(File) ->
                       wn_file_layer:delete_file(File#wn_file.resides,
                                                 File#wn_file.id)
               end,WnJob#wn_job.files),
             {error,already_exists}
     end,
     State}.

handle_info(_Info, State) -> {noreply, State}.

terminate(_Reason, _State) -> ok.

code_change(_OldVsn, State, _Extra) -> {ok, State}.

%%%==========================================================
%%% Internal functions
%%%==========================================================
try_send_files([F|R]) ->
    case wn_file_layer:add_file(F) of
        ok -> try_send_files(R);
        E -> E
    end;
try_send_files([]) -> ok.

resource_is_sufficient(WnJob,WnResource) ->
    JobResourceType = WnJob#wn_job.resources,
    ResourceTypes = [T || {T,_} <- WnResource#wn_resource.type],
    at_least_one(JobResourceType,ResourceTypes).

at_least_one([],_) -> false;
at_least_one([X|R],T) ->
    lists:member(X,T) orelse at_least_one(R,T).

signal_resource(JobKeeperPid,WnResource) ->
    WnResource#wn_resource.pid ! JobKeeperPid.

job_collector(From) ->
    Nodes = [node()|nodes()],
    fun() ->
            Res = lists:foldr(
                    fun(Node,Acc) ->
                            case rpc:call(Node,erlang,whereis,[?MODULE]) of
                                undefined -> Acc;
                                _Pid -> gen_server:call({?MODULE,Node},
                                                        list_jobs)++Acc
                            end
                    end,[],Nodes),
            gen_server:reply(From,Res)
    end.

This implementation also causes us to add a new component module, wn_job_keeper.erl. Likewise, this is a minimal module which will just start a process:

%%%-------------------------------------------------------------------
%%% @author Gianfranco <[email protected]>
%%% @copyright (C) 2011, Gianfranco
%%% Created : 13 Jan 2011 by Gianfranco <[email protected]>
%%%-------------------------------------------------------------------
-module(wn_job_keeper).
-export([start_link/1, init/1]).
-include("include/worker_net.hrl").

start_link(WnJob) ->
    spawn_link(wn_job_keeper, init, [WnJob]).

init(WnJob) -> loop(WnJob).

loop(WnJob) ->
    receive
        _ -> loop(WnJob)
    end.

Now, with a base implementation that almost fulfils the first story,

"I want to be able to describe a job as
• A set of Files [#wn_file{}]
• The possible resource types needed (disjunction)
• A list of commands for running the job
• A timeout the job is given while running (in seconds)
• A job id – primary key for the job"

we can try making a test design for the job execution and add the timeout field to the record. But that will be handled in the next post - otherwise this post would get gigantic and never be posted, as I've had a lot of other things to do the previous weeks.

Cheers
/G
The advantages of field research in UX design

When it comes to user experience (UX) design, research is key. User experience research is a process of understanding user behaviour and needs, and it is a critical part of the design process for any digital product. It can be done through different research methods, depending on the project and the team's needs.

There are many types of UX research, but one of the most important is field research. Field research involves observing users in their natural environment and can provide valuable insights into how they interact with your product. This method has several advantages over other forms of user research. In this blog post, we will discuss the advantages of using field research in UX design and show you how to get started!

Advantage #01: Field research provides context

Context is important in UX design. You need to understand the user's environment and how they interact with your product within that context. When you observe users in their natural environment, you get a better understanding of the context in which they are using your product. This can give you valuable insights into how they interact with it and what needs they are trying to fulfil. You can see how they use your product in the real world, and you can understand their needs and motivations. This is essential for creating a user-friendly product that meets their needs.

Advantage #02: Field research reveals unmet needs

User feedback is an important part of UX design, but getting accurate feedback from users is not always easy. When you observe users in their natural environment, you can see how they interact with your product and identify any unmet needs. You can also see which features they use and what they don't like. This information is essential for improving your product and meeting the needs of your users.

Advantage #03: Field research helps to build empathy

Empathy is important in UX design.
When you understand the needs of your users, you can design a better user experience. Field research helps build empathy by providing insights into your users' lives. You can see how they live and work, and understand their challenges and pain points. This information is essential for designing a user-friendly product.

Advantage #04: Field research is cost-effective

Field research is a cost-effective way to gather data about your users. It is less expensive than other methods, such as lab research or surveys. You can save money by observing users in their natural environment.

Advantage #05: Field research is flexible

Field research is a flexible method that can be adapted to suit the needs of your project. You can tailor it to meet your specific requirements and change it as needed. This flexibility makes it a valuable tool for UX designers.

How to get started with field research

So, now you know the advantages of using field research in UX design. But how do you get started? Here are some tips:

• Choose the right research method: Field research is just one of many research methods. It's important to choose the right method for your project. Consider your objectives and the needs of your team.
• Define your scope: Once you've decided to use this method, you need to define the scope of your project. What do you want to learn? What are your objectives?
• Identify your target users: Who are you going to research? It's important to identify your target users and select a representative sample.
• Plan your research: Planning is essential for any research project. You need to plan your field research, including the methods you will use, the data you will collect, and the time frame.
• Conduct your research: Once you've planned it, it's time to conduct it! Observe users in their natural environment and collect data about their behaviour.
• Analyze your data: After you've collected it, it's time to analyze it.
Look for patterns and trends in the data. Identify any unmet needs or pain points.
• Create a report: Once you've analyzed your data, you must create a report. This report will be used to improve your product and meet the needs of your users.

Field research is an important part of UX design. It helps to build empathy for users, understand their needs, and identify any unmet needs. It's a cost-effective way to gather data about your users, and it's flexible enough to be adapted to suit the needs of your project. If you're planning a UX research project, consider using field research to get the most accurate data about your users.

What are your thoughts? Leave a comment and let me know! If you need any help with your UX research, contact me!
Intuitionistic mathematics for physics

At MSFP 2008 in Iceland I chatted with Dan Piponi about physics and intuitionistic mathematics, and he encouraged me to write down some of the ideas. I have little, if anything, original to say, so this seems like an excellent opportunity for a blog post. So let me explain why I think intuitionistic mathematics is good for physics.

Intuitionistic mathematics, whose main proponent was L.E.J. Brouwer, is largely misunderstood by mathematicians. Consequently, physicists have strange ideas about it, too. For example, David Deutsch somehow managed to write in his otherwise excellent popular science book "The Fabric of Reality" that intuitionists deny the existence of infinitely many natural numbers (those would be the ultrafinitists, if there are any). He also produced rather silly arguments against intuitionistic mathematics, which I explained to myself by believing that he never had a chance to learn that intuitionistic mathematics supports his point of view.

While Brouwer's and other preintuitionists' reasons for intuitionistic mathematics were philosophical in nature, there is today a vibrant community of mathematicians, logicians, computer scientists, and even the odd physicist, who work with intuitionistic mathematics not because of their philosophical conviction but because it is simply the right kind of math for what they are doing.

Intuitionistic understanding of truth

A common obstacle in understanding intuitionistic logic is the opinion that the difference between classical and intuitionistic logic arises because classicists and intuitionists just happen to disagree about what is true. A typical example of this is the principle known as Proof by Contradiction:

For every proposition $\phi$, if $\phi$ is not false then $\phi$ is true.

With a formula we write this as $\forall \phi \in \mathsf{Prop}, \lnot \lnot \phi \Rightarrow \phi$. Classical mathematicians accept it as true.
Intuitionists do not accept it, but neither do they claim it is false. In fact, they claim that the principle has no counterexamples, that is $\lnot \exists \phi \in \mathsf{Prop}, \lnot (\lnot \lnot \phi \Rightarrow \phi)$. This becomes very confusing for classical mathematicians who think that the two displayed formulae are equivalent, because they believe in Proof by Contradiction. It is like believing that the Earth is flat while trying to make sense of Kepler's Laws of planetary motion.

The difference between intuitionistic and classical logic is in the criteria for truth, i.e., what evidence must be provided before a statement is accepted as true. Speaking vaguely, intuitionistic logic demands positive evidence, while classical logic is happy with lack of negative evidence. The intuitionist view is closer to the criterion of truth in science, where we normally confirm a statement with an experiment (positive evidence), but this analogy should not be taken too far.

What counts as "evidence" is open to interpretation. Before I describe the three most common ones below, let me just explain the difference between $\phi$ ("$\phi$ is true") and $\lnot \lnot \phi$ ("$\phi$ is not false"). Intuitionistically:

• $\phi$ holds if there is positive evidence supporting it,
• $\lnot \phi$ holds if it is contradictory to assume $\phi$, that is to say, evidence of $\phi$ would entail a contradiction,
• $\lnot \lnot \phi$ holds if it is contradictory to assume that it is contradictory to assume $\phi$.

That is a bit complicated. In essence, it says that $\lnot \lnot \phi$ is accepted when there is no evidence against it. In other words, $\lnot \lnot \phi$ means something like "$\phi$ cannot be falsified" or "$\phi$ is potentially true". For example, if someone says "There is a particle which does not interact with anything in the universe.", that would be a statement which is not accepted as true, for how would you ever present positive evidence?
But it is accepted as potentially true, for how would you ever falsify it? A statement which is logically equivalent to one of the form $\lnot \lnot \phi$ is called doubly negated. For the purposes of this post I shall call a statement $\phi$ potentially true if its double negation $\lnot \lnot \phi$ is true. It seems nontrivial to come up with useful statement in physics which are only potentially true (but see the discussion about infinitesimals below). Perhaps Karl Popper would have something to say about that. Let me now describe three most common interpretations of “evidence” in intuitionistic logic. Computational interpretation This is the interpretation of intuitionistic logic commonly presented in computer science. We view all sets as represented by suitable data structures—a reasonable point of view for a computer scientist. Then a statement is taken to be true if there exists a program (computational evidence) witnessing its truth. To demonstrate the idea, consider the statement $\forall x \in A, \exists y \in B, \phi(x, y)$. This is taken to be true if there exists a program which accepts $x$ and outputs $y$ together with computational evidence that $\phi(x,y)$ holds. Another example: the statement $\forall x \in A, \phi(x) \lor \psi(x)$ is true if there exists a program which takes $x$ as input and outputs either $0$ and evidence of $\phi(x)$, or $1$ and evidence of $\psi(x)$. In other words, the program is a decision procedure which tells us which of the two disjuncts holds, and why. Under this interpretation the Law of Excluded Middle fails because there are unsolvable decision problems, such as the Halting problem. The computationally minded readers might entertain themselves by figuring out a computational explanation of potentially true statements (Hint: first interpret Pierce’s Law in terms of continuations). I have not done it myself. 
Topological interpretation

We may replace the phrases “data structure” and “program” in the computational interpretation by “topological space” and “continuous function”, respectively. Thus a statement is true if it is witnessed by a continuous function which transforms input (hypotheses) to output (conclusions).

The basis for this explanation may be found in physics if we think about what it means for a function to be continuous in terms of communication or information processing. Suppose an observer wants to communicate a real-valued quantity $x$ to another observer. They can do it in many ways: by making sounds, by sending electromagnetic signals, by sending particles from one place to another, by manufacturing a stick of length $x$ and sending it by mail, etc. However, as long as they use up only a finite amount of resources (time, space, energy) they will be able to communicate only a finite amount of information about $x$. Similarly, in any physical process (computer, brain, abacus) which transforms an input value $x$ to an output value $f(x)$ the rate of information flow is finite. Consequently, in finite time the process will obtain only a finite amount of information about $x$, on the basis of which it will output a finite amount of information about $f(x)$. This is just the definition of continuity of $f$, phrased in terms of information flow rather than $\epsilon$ and $\delta$. Notice that we are not assuming that $f$ is computable, because we do not want to make the rather sweeping assumption that all physical processes are computable. The conclusion is that “all functions are continuous”, including those that witness the truth of statements.

You might be thinking that an analog-to-digital converter is a counterexample to the above argument. It is a device which takes as input an electric signal and outputs either 0 or 1, depending on whether the voltage of the signal is below or above a given threshold.
Indeed, this would be a discontinuous function, if only such converters worked exactly. But they do not: they always have a tolerance level, and the manufacturer makes no guarantees about the converter working correctly very close to the threshold value.

A useful exercise is to think about the difference between “all functions are continuous”, “potentially all functions are continuous”, and “all functions are potentially continuous”. Which one does the above argument about finite rate of information processing support?

Local truth

This explanation of intuitionistic logic is a bit more subtle, but also much more powerful and versatile. It is known to categorical logicians as the Kripke–Joyal or sheaf semantics, while most logicians are familiar at least with the older Kripke semantics.

Imagine a planet with a meteorologist at each point of the surface, measuring the local temperature $T$. We assume that $T$ varies continuously with position. A statement such as $T > 273$ is true at some points of the planet and false at others. We say that it is locally true at $x$ if there exists a small neighborhood around $x$ where it is true. In other words, a statement is locally, or stably, true at a given point if it remains true when we perturb the point a little. On this planet a statement is globally true if it is locally true everywhere, and it is globally false if its negation is locally true everywhere. There are also many intermediate levels of truth. The truth value (a measure of truth) of a statement is the set of those points at which the statement is locally true. Such a set is always open.

The explanation so far is a bit wrong. For a statement to be locally true at $x$, not only must it be true in a neighborhood of $x$, but it must also be true everywhere in the neighborhood “for the same reason”.
For example, the statement “$T > 273$ or $T \leq 273$” is true at $x$ if there exists a neighborhood $U$ of $x$ such that $T > 273$ everywhere on $U$, or $T \leq 273$ everywhere on $U$. The reason, namely which of the two possibilities holds, must be the same everywhere on $U$. The truth value of $T = 273$ is the interior of the set of those points at which $T$ equals 273, while the truth value of $T \neq 273$ is the exterior of the set of those points at which $T$ equals 273. Thus the truth value of the disjunction “$T = 273$ or $T \neq 273$” need not be the entire planet—it will miss isolated points at which $T$ is 273. The Law of Excluded Middle is not valid.

By changing the underlying space and topology, we can express various notions of truth. We can, for example, incorporate the passage of time, or a universe branching into possible worlds. In the most general case the underlying space need not even be a space, but a category with a so-called Grothendieck topology, which determines what “locally” means. Apart from being a wonderful mathematical tool, it should be possible to use sheaf semantics to clarify concepts in physics. I would expect the notions of “truth stable under small perturbation” and “truth local to an observer” to appeal to physicists. Fancy kinds of sheaf semantics have been proposed to explain features of quantum mechanics; see for example this paper by Bas Spitters and his coworkers.

Smooth infinitesimal analysis

Philosophical explanations and entertaining stories about intuitionistic mathematics are one thing, but getting actual benefits out of it is another. For physicists this means that they will want to calculate things with it. The good news is that they are already doing it; they just don’t know it! There is something odd about how physicists are taught mathematics—at least in my department.
Physics majors learn the differential and integral calculus in the style of Cauchy and Weierstrass, with $\epsilon$–$\delta$ definitions of continuity and differentiability. They are told by math professors that it is a sin to differentiate a non-differentiable function. They might even be told that the original differential and integral calculus, as invented by Leibniz and Newton, was flawed because it used the unclear concept of infinitesimals, which were supposed to be infinitely small yet positive quantities. Then these same students go to a physics class in which a physics professor never performs $\epsilon$–$\delta$ calculations, freely differentiates everything in sight, and tops it off by using the outlawed infinitesimals to calculate lots of cool things.

What are the students supposed to think? Clearly, the “correct” mathematics is useless to them. It’s a waste of time. Why aren’t they taught mathematics that gives a foundation to what the physics professors are actually doing? Is there such math? Yes there is. It’s the mathematics of infinitesimal calculus, brought forward to the 20th century by Anders Kock and Bill Lawvere under the name Synthetic Differential Geometry (SDG), or Smooth Infinitesimal Analysis. (I am too young to know exactly who invented what, but I’ve heard people say that Eduardo Dubuc also played a part. I would be happy to correct bibliographical omissions on my part.) By the way, I am not talking about Robinson’s non-standard analysis, which uses classical logic.

This is not the place to properly introduce synthetic differential geometry. I will limit myself to a few basic ideas and results. For a first reading I highly recommend John Bell’s booklet A Primer of Infinitesimal Analysis. If you refuse to read physical books, you may try his shorter An Invitation to Smooth Infinitesimal Analysis online.
For further reading, Anders Kock’s Synthetic Differential Geometry is an obvious choice (available online!), and there is also Moerdijk and Reyes’s Models of Smooth Infinitesimal Analysis, which shows in detail how to construct models of SDG using sheaves of germs of smooth functions.

To get a feeling for what is going on, and why intuitionistic logic is needed, let us review the usual proof that infinitesimals do not exist. This requires a bit of logical nitpicking, so bear with me. Both intuitionistic and classical mathematics agree that there is no real number $x$ which is neither negative, nor zero, nor positive: $\lnot \exists x \in \mathbb{R}, \lnot (x < 0) \land \lnot (x = 0) \land \lnot (x > 0)$. (There is some disagreement as to whether every number is either negative, zero, or positive, but that is beside the point right now.)

A nilpotent infinitesimal of second degree, or just infinitesimal for short, is a real number $dx$ whose square is zero. Any such $dx$ is neither negative nor positive, because both $dx > 0$ and $dx < 0$ imply $dx^2 > 0$, which contradicts $dx^2 = 0$. If $dx$ were also non-zero, we would have a number which is neither negative, zero, nor positive. Thus we have proved that an infinitesimal cannot be non-zero: $dx^2 = 0 \Rightarrow \lnot \lnot (dx = 0)$. A classical mathematician will now conclude that $dx = 0$ by applying Proof by Contradiction. Intuitionistically, we have only shown that infinitesimals are potentially equal to zero. But are there any infinitesimals which are actually different from zero? It can be shown from the main axiom of SDG (see below) that non-zero infinitesimals potentially exist. It is a confusing world: on one hand all infinitesimals are potentially zero, but on the other non-zero ones potentially exist. Like all good things in life, intuitionistic mathematics is an acquired taste (and addictive). Can a physicist make sense of all this?
We may think of infinitesimals as quantities so small that they cannot be experimentally distinguished from zero (they are potentially zero), but neither can they be shown to all equal zero (potentially there are some non-zero ones). By the way, we are not talking about lengths below the Planck length, as there are clearly real numbers smaller than $1.6 \times 10^{-35}$ whose square is positive.

The actual axiom which gets the infinitesimal calculus going does not explicitly state anything about non-zero infinitesimals. Instead, it expresses the principle of micro-affinity (sometimes called micro-linearity) that physicists use in their calculations.

Principle of micro-affinity: An infinitesimal change in the independent variable $x$ causes an affine (linear) change in the dependent variable $y = f(x)$. More precisely, if $f : R \to R$ is any function, $x \in R$ and $dx$ is an infinitesimal, then there exists a unique number $f'(x)$, called the derivative of $f$ at $x$, such that $f(x + dx) = f(x) + f'(x) \cdot dx$.

This principle has many consequences, such as the potential existence of non-zero infinitesimals described above. For actual calculations the most important consequence is the

Law of cancellation: If $a$ and $b$ are real numbers such that $a \cdot dx = b \cdot dx$ for all infinitesimals $dx$, then $a = b$.

What this says is that we may cancel infinitesimals when they are arbitrary. This is important because infinitesimals do not have inverses (they are potentially zero). Nevertheless, we may cancel them in an equation, as long as they are arbitrary.

Let me show how this works in practice by calculating the derivative of $f(x) = x^2$. For an arbitrary infinitesimal $dx$ we have $f'(x) \cdot dx = f(x + dx) - f(x) = (x + dx)^2 - x^2 = x^2 + 2 x \cdot dx + dx^2 - x^2 = 2 x \cdot dx$, where we used the fact that $dx^2 = 0$. Because $dx$ is arbitrary, we may cancel it on both sides and get $f'(x) = 2 x$.
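As an aside for readers who like to compute, the rule $f(x + dx) = f(x) + f'(x) \cdot dx$ with $dx^2 = 0$ can be simulated in any programming language with “dual numbers”, pairs $a + b \cdot dx$. The following Python sketch is my own illustration, not part of the post; it models only the ring of dual numbers $\mathbb{R}[dx]/(dx^2)$, not the full smooth real line:

```python
# A sketch of nilpotent infinitesimals via dual numbers a + b*dx with dx^2 = 0.
# This models the ring R[dx]/(dx^2), not the smooth real line of SDG; the
# class and its names are my own illustration, not part of the post.

class Dual:
    def __init__(self, a, b=0.0):
        self.a = a  # ordinary value
        self.b = b  # coefficient of the infinitesimal dx

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        # (a + b dx)(c + d dx) = ac + (ad + bc) dx, because dx^2 = 0
        other = self._coerce(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Use f(x + dx) = f(x) + f'(x) dx and read off the coefficient of dx."""
    return f(Dual(x, 1.0)).b
```

For the example $f(x) = x^2$, `derivative(lambda x: x * x, 3.0)` yields `6.0`, which is the cancellation step performed symbolically above. This is forward-mode automatic differentiation with dual numbers, a connection a commenter below also points out.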
I emphasize that the infinitesimal calculation above is mathematically precise and logically correct. It is in fact very close to the usual treatment, which goes like this: $f'(x) = (f(x+dx) - f(x))/dx = (x^2 + 2 x \cdot dx + dx^2 - x^2)/dx = 2 x + dx = 2 x$. There are two incorrect steps here: we divided by an infinitesimal $dx$ without knowing that it is different from zero (it isn’t!), and we pretended that $2 x + dx$ is equal to $2 x$ because “$dx$ is very small”. By the same reasoning we should have concluded that $f(x+dx) - f(x) = f(x) - f(x) = 0$, but we did not. Why?

The principle of micro-affinity allows us to easily derive the usual rules for computing derivatives, show the potential existence of non-zero infinitesimals, prove the fundamental theorem of calculus in two lines, derive the wave equation like physicists do it, etc. And it is all correct, exact math. No approximations, no guilty feeling about throwing away “negligible terms” here but not there, and other hocus-pocus that physicists have to resort to because nobody told them about this stuff.

Just for fun, let me compute more derivatives. The general strategy in computing $f'(x)$ is to consider an arbitrary infinitesimal $dx$ and express $f'(x) \cdot dx = f(x + dx) - f(x)$ as a quantity multiplied by $dx$. Then we cancel $dx$ on both sides and get $f'(x)$. Throughout we use the fact that $dx^2 = 0$. Here we go:

• The derivative of $x^n$ is $n x^{n-1}$:
$(x+dx)^n - x^n = x^n + n x^{n-1} \cdot dx - x^n = n x^{n-1} \cdot dx$

• Leibniz’s formula for derivatives of products, $(f(x) \cdot g(x))' = f'(x) \cdot g(x) + f(x) \cdot g'(x)$:
$f(x+dx) \cdot g(x+dx) - f(x) \cdot g(x) =$
$(f(x) + f'(x) \cdot dx)(g(x) + g'(x) \cdot dx) - f(x) \cdot g(x) =$
$(f'(x) \cdot g(x) + f(x) \cdot g'(x)) \cdot dx$.
• Chain rule, $f(g(x))' = f'(g(x)) \cdot g'(x)$:
$f(g(x+dx)) - f(g(x)) =$
$f(g(x) + g'(x) \cdot dx) - f(g(x)) =$
$f(g(x)) + f'(g(x)) \cdot g'(x) \cdot dx - f(g(x)) =$
$f'(g(x)) \cdot g'(x) \cdot dx$
where we used the fact that $g'(x) \cdot dx$ is infinitesimal because its square is zero.

There you have it: in a paragraph we derived precisely, and in sufficient detail, what usually takes a whole lecture of $\epsilon$–$\delta$ manipulations.

If we stick to classical logic, the principle of micro-affinity is false. To see this, consider a function with a jump, such as

$j(x) = 0$    if $x < 0$
$j(x) = 1$    if $x \geq 0$

At $x = 0$ the principle of micro-affinity fails. This is a counterexample only in classical mathematics, because intuitionistically we cannot prove that there is a function with a jump. Concretely, the above definition of $j(x)$ is not intuitionistically valid because it presupposes $\forall x \in R, x < 0 \lor x \geq 0$.

Space-time anomalies

But wait! Intuitionistically we can construct non-differentiable continuous functions, such as the absolute value $f(x) = |x|$, for which the principle of micro-affinity fails, too. Well, I am not telling the whole story. The smooth real line $R$ of infinitesimal analysis is not the usual real line $\mathbb{R}$, as constructed by Richard Dedekind. It does not support computation of absolute values.

This seems pretty bad. If we cannot have a function such as the absolute value, then it is not clear how to model phenomena that involve a sudden change of direction, such as reflection of light and collision of particles. Can rays not bounce off mirrors, intuitionistically? Yes they can; it is just that the intuitionistic treatment of sudden changes is more profound than the classical one. Consider a particle which moves freely up to time $t_0$, then bounces off a wall, and moves freely after that.
Its position $p$ is described as a function of time $t$ in two parts,

$p(t) = p_1(t)$    if $t \leq t_0$
$p(t) = p_2(t)$    if $t \geq t_0$

where $p_1$ and $p_2$ are smooth functions and $p_1(t_0) = p_2(t_0)$. Because $p$ is defined separately for $t \leq t_0$ and for $t \geq t_0$, its domain of definition is the union of two half-lines, $D = \lbrace t \in R \mid t \leq t_0 \rbrace \cup \lbrace t \in R \mid t \geq t_0 \rbrace$. Classical mathematics proves that $D = R$, which amounts to forgetting that $t_0$ is a special moment. In the smooth world, $D$ is only a subset of $R$: it is not equal to $R$ because it carries more information than $R$. As strange as this may seem, it is useful, because it encodes moments in time or places in space where special things happen, such as a sudden change of movement or a sudden change of density. Smooth space-time, say $R^4$, allows only smooth motion and smooth distribution of mass. If we place non-smooth mass in it, the space will change to a subset of $R^4$ which carries additional information about the anomalies contained in it.

This post has become very long so I will stop here.

36 thoughts on “Intuitionistic mathematics for physics”

1. First of all, great post! I had encountered intuitionistic maths before and thought “more mathematical logicians playing games with definitions”. As I did more computer science I could see why one would like to use constructive maths, but the mathematician in me thought proof by contradiction was too large a sacrifice to make. Recently, I read about a non-constructive proof that there are irrational numbers $a$ and $b$ such that $a^b$ is rational. This pushed me further away from the non-constructive viewpoint. I think the treatment of infinitesimals you describe has sent me over the line to the intuitionist camp. Furthermore, it provides a logically sound basis to automatic differentiation using dual numbers – exactly the nilpotent infinitesimals of second degree you talk about.
That said, I’m still uncomfortable with the interpretation of the absolute value function. Could you please expand on this some more? Does it have any relationship with your earlier post in which you talk about all computable functions being continuous?

2. Answer to mark’s question about absolute value: suppose we had an operation which assigned to every number `x` a number `|x|` such that `x < 0 => |x| = -x` and `x > 0 => |x| = x`, where `a <= b` is defined as `not (b < a)`. Above we proved that for an infinitesimal `dx` we have `0 <= dx <= 0`, therefore both `|dx| = dx` and `|dx| = -dx`, from which it follows that `dx = 0`. So, if absolute value exists then all infinitesimals are zero. But this contradicts the principle of micro-affinity, because if all infinitesimals are zero, then for every infinitesimal `dx` we have `0 * dx = 0 = 1 * dx`, and now by the law of cancellation we get `0 = 1`. But this is nothing to despair about, because the absolute value exists as a function `{x in R | x <= 0 or 0 <= x} -> R`, defined by

`|x| = x`     if `x >= 0`
`|x| = -x`     if `x <= 0`

It’s just that you need to know the sign of `x` in order to compute its absolute value.

3. Having read the invitation by Bell that you link to, I can see why things appear strange to me. In particular, the notion of the “indecomposability” of the intuitionist’s reals is what appears to be behind the lack of discontinuities. Furthermore, you cannot define the absolute value function in the usual way because that presupposes being able to split the reals at 0. My ideas of continuity are tied up with the point-set topology I studied many years ago. I think I will have to rethink several basic definitions, such as open and closed sets, in light of intuitionistic logic to better understand what’s going on here.

4. Is it fair to say “non-zero infinitesimals potentially exist” (which I would parse as ¬¬∃dx∈R [¬(dx = 0) ∧ dx^2 = 0])?
I would actually take this to be provably false (as an intuitionistic field, non-zeros should be the same as invertibles, and thus be closed under squaring). It seems the more accurate statement would be “not all infinitesimals are zero” (which I would parse as ¬∀dx∈R [dx^2 = 0 => dx = 0]). But perhaps I should just use a different translation from ordinary language to formal intuitionistic logic…

5. Answer to Sridhar: it is not true that in an intuitionistic field non-zeros coincide with invertibles. In intuitionistic algebra we use an apartness relation, which is a constructivized version of non-equality, rather than non-equality itself. Thus the axiom about invertible elements in the field is “if `x` is apart from zero, then it is invertible” (and not “if `x` is not zero, then it is invertible”). Concretely, in an ordered field apartness is defined as: `x` and `y` are apart iff `x < y` or `x > y`. So you can only invert an element if you know that it is negative or positive. It does not generally follow that every non-zero element is negative or positive (I actually mention this issue in the post above). I hope that answers your question. In the case of smooth reals we can show that for every infinitesimal `x` we have “`x` is not apart from zero”. So you cannot have an invertible infinitesimal (although there are variants of the axioms of smooth analysis which give you that).

6. Hm, I see. But it’s still not clear to me how to derive the potential existence of non-zero infinitesimals from the principle of micro-affinity. Could you expand on that? For example, taking the formalization of this statement as I stated before, `not not exists dx in R, (not (dx = 0) and dx^2 = 0)`, this should be intuitionistically equivalent to `not forall dx in R, not (not (dx = 0) and dx^2 = 0)`, which in turn should be equivalent to `not forall dx in R, (dx^2 = 0 => not not (dx = 0))`, but this is the negation of what you prove above. Where am I going wrong?

7.
Also, my impression that the relevant notion of “intuitionistic field” was one in which “¬(x = 0) iff x is invertible” came from, for example, the last field axiom in section R_1 of p. 102 of Bell’s “A Primer of Infinitesimal Analysis”. Of course, I understand now that there are also other reasonable conceptions of intuitionistic fields, but this would seem to be the one at use in at least Bell’s formulation of smooth infinitesimal analysis.

8. Hm, somehow, my math symbols have turned garbled in some (but not all) of my above comments, even though they were fine before. Odd…

9. I feel like I have poked a hole in it. Something that is potentially true might actually be true: let A be a true statement, now suppose A is false; this conflicts with A being true, so A is not false. So infinitesimals are potentially in danger. x0 => f(x)>0 or f(x)>0 f(0)=0 Say we have a z, an ‘f-infinitesimal’: f(z)=0. Then z>0 or z or f(z)f(g(x)) ‘h-infinitesimal’. But if f(g(z))=0, if g(z)=0, then f(g(z))= f(0)=0, so it is in there, so the ‘g-infinitesimals’ are a subset of the ‘h-infinitesimals’. Now, make h the identity, for which e(z)=0 implies z=0; you see that all the infinitesimals based on functions like this are zero; there are no infinitesimals. So… What am I doing wrong?

10. Dear Jasper, I am glad you are interested in this topic. My post is not intended to be a complete description of intuitionistic logic and infinitesimal analysis. It is quite impossible to learn these topics just from my post, so I recommend that you look at the literature cited in the post, especially the “Primer” by John Bell. Regarding your comments, let me just say that my post contains at least one error. Namely, the interpretation of `x != y` should be apartness rather than inequality. By this I mean that `x != y` is defined as `x < y or y < x`, rather than the classical `not (x = y)`. I thank Sridhar Ramesh for noticing the problem. Let me also comment briefly on what you wrote.
Firstly, in comment 11, I am not sure what the purpose of the beginning of the post is: you assume A is true and false at the same time, so no wonder you can get anything you like out of that. Yes, the principle of micro-affinity states that the derivative is unique; I did not write that clearly enough. The Law of Cancellation is proved from the Axiom of Micro-affinity, see John Bell’s Primer. When I speak of “arbitrary” infinitesimals in the Law of Cancellation, what I mean is that in order to cancel `dx` in an equation `a * dx = b * dx` you first have to prove `forall dx, dx^2 = 0 => a * dx = b * dx`, which is not the usual law of cancellation `forall a, b, c, c != 0 and a * c = b * c => a = b`. There are meta-theorems saying that the usual derivatives are the same as the ones developed in smooth analysis; again, see John Bell’s Primer.

Further, if `dx^2 = 0` then `not (dx != 0)`, i.e., you cannot have an infinitesimal which is apart from 0. So when you assumed that you had one like that, of course you can derive anything, among other things your `dx < 0 or dx > 0`. If `dx` is infinitesimal then `-1/n < dx < 1/n` for all natural numbers `n`; this is what we mean when we say that infinitesimals are “infinitely small”. I am not sure what you mean by those `f`-infinitesimals. I think you should think more carefully about them and make sure you do not use classical logic on the way.

11. Hello! I’m interested in understanding what you’ve written about. I’ve been convinced of intuitionist logic, philosophically, since taking Logic 101 as an undergrad philosophy major. I don’t know how many times I heard the professor say, “Matthew, we’re NOT studying Brouwer!” I had a great deal of difficulty in that class, in no small part because the law of the excluded middle doesn’t make sense to me, and we were expected to reason with it. (I scored a 1530 on the SAT; intelligence wasn’t the problem.)
For the purpose of this question, let’s just say my current level of mathematics proficiency is basic, college-level algebra. (I could use a review.) Can you recommend a series of texts I could study, or courses I could take, that would equip me to even understand the “Primer” by John Bell? I’ve asked this same question over the years, but the answer I’ve gotten repeatedly is that I first need to study logics and maths that acknowledge and use the law of the excluded middle… and then I can get into intuitionist logic and math. I’m not interested in doing this. Surely, I keep thinking: there has to be a way for me to learn more logic and math that doesn’t require me to turn a blind eye to reality! Thank you for your suggestions, and thank you for this fine post!

1. Dear Matthew, the math we are talking about here is at about advanced undergraduate level. It would help if you knew the basics of calculus. For a super-quick and dirty course in calculus I would probably recommend one of Schaum’s Outline series books, perhaps “Calculus Demystified”, but before you cough up 20 dollars for that book have a look at the Wikibooks Calculus. Since you are coming from the philosophical angle, I definitely suggest that you have a look at Errett Bishop’s manifesto, which is the introduction to his Foundations of Constructive Analysis. It is a complete lie that you have to study classical mathematics in order to be able to later get to intuitionistic mathematics. Quite the contrary: once your brain has been trained how to use the law of excluded middle, it is a real effort to deprogram yourself and be able to reason constructively. Your teachers were just trying to shut you up.

12. Dear Andrej, Thank you for your prompt reply and encouragement! Though the Amazon.com reviews of Calculus Demystified weren’t very positive, by and large, the Schaum’s Outline series looks good. The Life of Fred series, which was recommended to me in the past, also looks like it might be a winner.
It’s good to hear both that it’s likely possible to avoid a study of classical mathematics in order to be able to get to intuitionist mathematics, and also that my concern about potential difficulties unlearning the classical stuff wasn’t unfounded. Again, thanks!

13. This goes on my list of things which I would gain immeasurable benefit from being told 10 years ago.

14. This is very intriguing. Thank you for this post! What I don’t get is how you compute the derivative of $x^n$. It seems to me that you make the assumption that the derivative is $n x^{n-1}$, put that in and arrive at your assumption. I guess I’m missing something here. I tried to compute the derivative of $f(x) = a^x$ (for some constant $a$), starting with $f'(x) \, dx = f(x + dx) - f(x) = a^{x + dx} - a^x$. Now, I know that I can express $a^{x + dx}$ by $a^x + f'(x) \, dx$, but that only leads me back to $f'(x) = f'(x)$. What would be the next step to complete the derivation?

15. Janno: For computing the derivative of $x^n$, Andrej uses the binomial theorem to expand $(x + dx)^n$. Here’s how to compute the derivative of $e^x$: $e^{x + dx} - e^x = e^x e^{dx} - e^x = e^x (1 + dx + (dx)^2/2 + \cdots - 1) = e^x (1 + dx - 1) = e^x \, dx$. The infinite series is actually finite, as all terms of order 2 and greater are identically zero.

16. Anonymous: You’re right about the derivative of $x^n$ and wrong about the derivative of $e^x$. Before you write down an infinite sum, like you did, you have to argue that it makes sense. Which it doesn’t in synthetic differential geometry. There are no axioms that ensure completeness of the smooth real line with respect to Cauchy sequences. Nor can there be such axioms, because they would allow us to define a non-smooth function as an infinite sum. Actually, the exponential function is introduced as the solution of the differential equation $y' = y$ with the initial condition $y(0) = 1$, so the derivative of $e^x$ is postulated to be $e^x$.

17.
For years I have looked to find a community of mathematicians who are not comfortable with analysis as it is traditionally taught. When I was young in college I started to explore intuitionism, but my life took a different path, yet I always retained an interest in mathematics as a ‘talented amateur’! I often wondered if it wasn’t possible to construct a mathematics in which $dx$ was a new number outside of the systems constructed for the real numbers, such that $dx^2$ vanished but $dx$ had no multiplicative inverse. Much appreciate your links.

18. A couple months ago, I did a course on smooth infinitesimal analysis with high school students, based on this post. It is of course tricky to do logic stuff with school students, since they typically have no background in writing and understanding (informal, but reasonably precise) proofs. Still, they liked it, were able to work with infinitesimals, and definitely got the main points. Here are the notes (in German only, I’m sorry).

19. Since we’re working with constructive maths, it’s worth pointing out that you can construct $dx$ such that $dx \cdot dx = 0$ quite easily. Consider the $2 \times 2$ matrix $$\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}$$ Its square is zero. Now an ordinary number $a$ is $$\begin{matrix} a & 0 \\ 0 & a \end{matrix}$$ and $a + dx$ is $$\begin{matrix} a & 1 \\ 0 & a \end{matrix}$$ Now simply carry out $(a + dx)^2$ using ordinary linear algebra. You will get $$\begin{matrix} a^2 & 2 a \\ 0 & a^2 \end{matrix}$$ which shows that $2 a \, dx$ is correct. This’ll work for $x^n$ as well as everything else. You just need to expand the definition of ‘number’ to $2 \times 2$ matrices. Also look at nilpotent Lie groups for the multivariate case.

20. Your suggestion is a well known one. If I am not mistaken you’re just suggesting that we work in the ring $\mathbb{R}[x,y]/(y^2)$, but you’re giving us the ring in terms of matrix representations. This gives us the infinitesimals, but not everything.
For example, in synthetic differential geometry the smooth reals $R$ form an (intuitionistic) ordered field, but you’ve only got a ring. There are also more global obstacles to be overcome. For instance, the total tangent space of any object $A$ is simply the exponential $A^\Delta$. To get these sorts of features you would have to add a lot more structure to your setup, and I think by the time you’d be done you would invent models of synthetic differential geometry.

21. I agree, it’s only an entry point. It’s useful for students, I would say, to have a concrete example of existence. Also, automatic differentiation schemes are based on little more than this, and have become very important in neural networks. SIA is amazing, on the other hand, for the view of the continuum that it provides; the continuum cannot be explained entirely in terms of the discrete, which is as it should be. Dual numbers really should be taught in high school, as well as intuitionistic logic. For that matter, I’ve had enough of people calling $\sqrt{-1}$ an “imaginary” number. It’s not. Here it is: $$\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix}$$ The square of that is $$\begin{matrix} -1 & 0 \\ 0 & -1 \end{matrix}$$ which is of course $-I$, where $I$ is the unit. “Complex” numbers are just $2 \times 2$ matrices $$\begin{matrix} a & -b \\ b & a \end{matrix}$$ People joke that math is just ‘castles floating in the air’, and ‘a religion’, but that’s because teachers don’t talk about these well known constructions that completely ground the subject, and lead to a better understanding.

22. http://arxiv.org/abs/1104.1492 I liked this paper on Fermat reals. They also discuss the philosophical points, and how all of this is really accessible to high-school students. As somebody else said, this is yet more stuff it would have been great to know 10 years ago (or, well, 20 in my case). Thanks again for your post, it really helped me.

23. This is great!
I particularly liked the description of how a particle bouncing off a wall may require a different notion of the continuum – it reminds me of something that I’ve read about Aristotle’s close examination of Zeno’s paradox of motion: that the continuum is only potentially infinitely divisible but not actually so, and that the continuum is indecomposable; do you happen to know how well received intuitionistic concepts of the continuum are in interpretations of QM?

24. @AndrejBauer: How do you justify writing $\forall \phi \in \mathsf{Prop}$? Sets and elements are first-class mathematical objects of the theory. But propositions are objects about which one can only speak in the metatheory of that theory.

1. Wherever did you get the idea that logic is outside mathematics? Anyhow, you should read $\mathsf{Prop}$ as the set of all truth values, not the set of all formulas. In other words $\mathsf{Prop}$ is isomorphic to the powerset of a singleton $\mathcal{P}(\lbrace\star\rbrace)$. For instance, you can express excluded middle as $\forall S \in \mathcal{P}(\lbrace\star\rbrace) . \star \in S \lor \lnot (\star \in S)$. Does that make you happier?

1. You say that Prop is in fact not the set of all propositions (= formulas). Why, then, do you denote the set of truth values “Prop”? “Prop” immediately suggests that it is the set of all propositions. You say that I can imagine $\mathsf{Prop}$ to be the set $\mathcal{P}(\lbrace\star\rbrace)$. But then the law of excluded middle $\forall\phi\in\mathsf{Prop}. \, \phi\lor\neg\phi$ becomes $\forall S\in \mathcal{P}(\lbrace\star\rbrace).\, S\lor\neg S$. But this latter statement does not make sense, since $S$ is a mathematical object and not a statement. Thus one can’t use $S$ to express a statement like “$S\lor\neg S$”. That’s where my confusion comes from. I hope you can clarify.

1. It is traditional (and in my opinion misleading) to refer to the object of all truth values as “the object of propositions”. I do not know why that is.
In Coq the object is called Prop, for instance. In a topos it is $\Omega$, and it is isomorphic to $\mathcal{P}(\lbrace \star \rbrace)$. You are mistaken about excluded middle. It is not expressed as $\forall S \in \mathcal{P}(\lbrace \star \rbrace) . S \lor \lnot S$ but rather as $\forall S \in \mathcal{P}(\lbrace \star \rbrace) . \star \in S \lor \lnot (\star \in S)$. In any case, this is a technicality which can be dealt with in many ways. For instance, one can observe that $\mathcal{P}(\lbrace \star \rbrace)$ is a complete Heyting algebra under the $\subseteq$ ordering, and that logical formulas denote elements of this Heyting algebra. Then one can really write $S \lor \lnot S$, and this would mean the same thing as $S \cup (\lbrace\star\rbrace \setminus S)$. I recommend that you consult a textbook such as Lambek and Scott’s “Introduction to Higher-Order Categorical Logic”. They explain these things there. You will also learn from that textbook about internal languages, which allow us to mix syntactic forms with semantic objects.
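The matrix trick in comment 19 is easy to verify mechanically. The Python sketch below is an illustration added here, not part of the comment thread; it encodes the dual number $a + b\,dx$ as the matrix $((a, b), (0, a))$ and checks the identities quoted above, including the forward-mode automatic-differentiation flavour mentioned in comment 21.

```python
def mat(a, b):
    """The 2x2 matrix ((a, b), (0, a)), representing the dual number a + b*dx."""
    return ((a, b), (0, a))

def mul(m, n):
    """Ordinary 2x2 matrix multiplication."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

dx = mat(0, 1)
assert mul(dx, dx) == mat(0, 0)   # dx^2 = 0, even though dx itself is nonzero

a = 3
assert mul(mat(a, 1), mat(a, 1)) == mat(a * a, 2 * a)   # (a + dx)^2 = a^2 + 2a dx

# A whiff of forward-mode automatic differentiation: cubing x + dx leaves
# f(x) = x^3 on the diagonal and f'(x) = 3x^2 in the upper-right entry.
x = mat(2, 1)                     # 2 + dx
cube = mul(mul(x, x), x)
assert cube == mat(8, 12)         # 2^3 = 8 and 3 * 2^2 = 12
print(cube)                       # ((8, 12), (0, 8))
```

Nothing here depends on the entries being these particular integers; the same two functions check the rule for any base point.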
Web Media Query Breakpoints with Styled Components

In every responsive web project, you need media queries to adapt to different screen sizes. Styled components are a popular way to write CSS styles in React applications. This article proposes a way to write media queries with the styled-components library Emotion.

CSS Pixels

In order to understand the values of the CSS breakpoints used, it is important to understand CSS pixels. CSS pixels are different from the actual screen resolution for most devices. For example, the Samsung Galaxy S8 mobile phone has a display with a physical resolution of 1440x2960. In CSS, the browser will report a resolution of 360x740. These are the device-independent pixels. A related metric is PPI, the number of pixels per inch of physical space. Smartphones have a small screen but often a high resolution, so their PPI is quite high compared to normal desktop screens. The website MyDevice has a very good comparison between physical pixels and CSS pixels.

Breakpoints

When using CSS media queries, it is necessary to align the breakpoints with CSS pixels. This is one snippet I found in a Gatsby starter project (Novela) that I find useful:

mediaQueries.ts

```ts
import { css } from '@emotion/core'

const breakpoints = [
  ['phone_small', 320],
  ['phone', 376],
  ['phablet', 540],
  ['tablet', 735],
  ['desktop', 1070],
  ['desktop_medium', 1280],
  ['desktop_large', 1440],
]

const toEm = (size: number) => size / 16 + 'em'

const mediaQueries = breakpoints.reduce(
  (acc, [label, size], i) => ({
    ...acc,
    // max-width media query, e.g. mediaQueries.desktop
    [label]: (...args) => css`
      @media (max-width: ${toEm(size)}) {
        ${css(...args)};
      }
    `,
    // min-width media query, e.g. mediaQueries.desktop_up
    // This is the prior breakpoint's size + 1
    [`${label}_up`]: (...args) => css`
      @media (min-width: ${toEm(breakpoints[i - 1][1] + 1)}) {
        ${css(...args)};
      }
    `,
  }),
  {}
)

export default mediaQueries
```

The breakpoints array defines the different breakpoints in CSS pixels and their associated devices. Note that the first entry has no prior breakpoint, so the generated phone_small_up helper would throw if called, because breakpoints[i - 1] is undefined for i = 0.

Styled Components with Media Queries

Styled components are React components that have a style associated with them. Emotion packages styled components in @emotion/styled. It is based on JavaScript's template literal syntax, so you can embed media queries in your styled components like so:

```ts
import styled from '@emotion/styled'
import mediaQueries from './mediaQueries'

const Button = styled.button`
  ${mediaQueries.phablet`
    color: black;
  `}
  color: turquoise;
`
```

This results in a button whose media query applies at a max-width of the phablet breakpoint (540 CSS pixels). The reverse would look like this:

```ts
import styled from '@emotion/styled'
import mediaQueries from './mediaQueries'

const Button = styled.button`
  ${mediaQueries.phablet_up`
    color: black;
  `}
  color: turquoise;
`
```

This results in a media query with a min-width of the phone breakpoint in CSS pixels plus one (377px), so everything above a phone resolution is black. This saves you the hassle of writing all the media queries and setting the breakpoints manually.

Conclusion

Media query breakpoints need not be managed manually; a simple library function can be used instead.

References

• Emotion: https://emotion.sh
• MyDevice: https://www.mydevice.io/
• Gatsby Starter Novela: https://novela.narative.co

Published 15 May 2020 by Thomas Derflinger. I am a visionary entrepreneur and software developer. In this blog I mainly write about web programming and related topics like IoT.
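The arithmetic the helper performs (pixels to em at the browser default of 16 px per em, and "prior breakpoint + 1" for the `_up` variants) can be checked independently of Emotion. The following Python sketch is an illustration added here, not part of the article or of the Novela project:

```python
# Reproduce the breakpoint arithmetic from mediaQueries.ts in plain Python,
# to show which boundary each label receives.
breakpoints = [
    ("phone_small", 320),
    ("phone", 376),
    ("phablet", 540),
    ("tablet", 735),
    ("desktop", 1070),
    ("desktop_medium", 1280),
    ("desktop_large", 1440),
]

def to_em(px):
    # Browsers default to 16 CSS pixels per em.
    return f"{px / 16:g}em"

queries = {}
for i, (label, size) in enumerate(breakpoints):
    queries[label] = f"@media (max-width: {to_em(size)})"
    if i > 0:  # the first breakpoint has no predecessor, so no *_up variant
        prior = breakpoints[i - 1][1]
        queries[f"{label}_up"] = f"@media (min-width: {to_em(prior + 1)})"

print(queries["phablet"])     # @media (max-width: 33.75em)
print(queries["phablet_up"])  # @media (min-width: 23.5625em)
```

Running this confirms the numbers quoted in the article: the phablet boundary is 540 / 16 = 33.75em, and phablet_up starts at (376 + 1) / 16 = 23.5625em.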
Infotainment

Definition

Infotainment is a combination of the words “information” and “entertainment,” representing a category of media content that aims to educate and entertain simultaneously. It blends factual information with engaging presentation, such as incorporating humor, visuals, or storytelling. Examples of infotainment can be found in various mediums like television shows, documentaries, podcasts, and online articles or videos.

Phonetic

The phonetic spelling of the keyword “Infotainment” is: /ˌɪnfoʊˈteɪnmənt/

Key Takeaways

1. Infotainment systems combine various types of in-car entertainment and communication features, such as radio, GPS navigation, and smartphone connectivity, to provide a seamless and enjoyable user experience while driving.
2. Modern infotainment systems also support advanced driver-assistance features, such as rearview cameras, lane departure warnings, and adaptive cruise control, enhancing safety and convenience on the road.
3. As technology continues to evolve, infotainment systems will likely become more integrated with other connected devices and services, making driving more efficient, enjoyable, and personalized to the user’s preferences.

Importance

Infotainment is an important technology term as it signifies the convergence of information and entertainment in a single platform, offering users a seamless and engaging experience. This blend of content delivery has transformed the way individuals consume media, particularly in industries like automotive, broadcasting, and consumer electronics. Infotainment systems have contributed to the enhancement of user experience, improved accessibility to diverse content, and support for various multimedia formats, making it a key element in the digital landscape. By merging practical information with entertainment, infotainment fosters greater user satisfaction and convenience, facilitating a stronger connection between people and technology.
Explanation

Infotainment serves as a vital feature of the modern lifestyle, blending information and entertainment to engage users while offering convenience and improving the overall user experience. This innovative fusion caters to various sectors, including automotive, mass media, and personal devices. One of the key purposes of infotainment is to deliver a seamless and engaging experience while consuming content, such as news, media, and general information. By incorporating interactive elements and visually appealing media, infotainment systems can enrich the users’ learning and retention process, empowering them to make informed decisions in different spheres of their lives. Additionally, it eases the information consumption process and makes it more accessible to a wider range of people.

In the automotive industry, for example, infotainment systems have redefined the way drivers and passengers interact with their vehicles. Modern cars are now equipped with sophisticated systems that amalgamate navigation, communication, multimedia, and vehicle diagnostics. From GPS-enabled navigation and voice command systems to real-time updates on weather and traffic conditions, infotainment systems offer a plethora of functionalities that add to the user’s convenience and safety on the road. Meanwhile, the media and entertainment industries utilize infotainment via various platforms – television shows, radio programs, podcasts, and mobile applications – to engage and retain audience attention. Through captivating storytelling and interactive elements, infotainment also plays a significant role in edutainment, delivering knowledge and information in an entertaining manner. Thus, infotainment has emerged as a versatile approach that bridges information and entertainment to cater to the diverse needs of users across different industries.
Examples of Infotainment

1. Tesla Model 3 Infotainment System: The Tesla Model 3 features an advanced infotainment system that combines entertainment and information in a single interface. It includes a 15-inch touchscreen display, which allows users to access various features like vehicle controls, music streaming, navigation, and Tesla-specific applications such as Supercharger and Destination Charger locations. Additionally, the system supports over-the-air software updates for continuous improvement and integration of new features.

2. Apple CarPlay and Android Auto: Apple CarPlay and Android Auto are both infotainment systems that enable users to connect their smartphones to their vehicles and control various applications through the car’s built-in display. These systems allow for seamless integration of features like voice recognition, navigation, music playback, messaging, and telephony. This allows drivers to stay connected and informed while also minimizing distractions on the road.

3. Audi MMI (Multi Media Interface): The Audi MMI is an infotainment system found in various Audi vehicles, providing a user-friendly interface for drivers and passengers to access entertainment, navigation, and vehicle settings. It features a high-resolution display and a control panel with touch-sensitive buttons, which allow users to control audio, climate, and communication functions. The system can also be controlled via voice commands, providing a hands-free experience for safer driving. The Audi MMI system is constantly updated and improved through software updates, ensuring that users have access to the latest features and functionalities.

Infotainment FAQ

1. What is Infotainment?

Infotainment is a combination of information and entertainment content, which is designed to educate, inform, and amuse the audience.
It often refers to media content, including TV shows, movies, or digital platforms, that blends education and entertainment to deliver informative and entertaining experiences.

2. What are the benefits of Infotainment?

Infotainment has several benefits, including providing educational content in an engaging format, making learning enjoyable and more accessible, increasing the retention of information, and offering a platform for creative expression and innovation. It can also help in generating awareness and sparking discussions on various topics.

3. What are some examples of Infotainment?

Examples of infotainment include educational TV shows, documentaries, edutainment games, podcasts, informative YouTube channels, and interactive websites that combine learning with entertainment elements. Popular infotainment examples include Bill Nye the Science Guy, How It’s Made, Jeopardy!, and TED Talks.

4. How does Infotainment differ from traditional education?

While traditional education focuses primarily on delivering information through a structured curriculum, infotainment emphasizes the importance of infusing entertainment into educational content. This approach helps make the learning process more engaging and enjoyable, often resulting in better retention and understanding of the material.

5. Can Infotainment be used as a primary source of education?

Infotainment is an excellent supplement to traditional education, but it should not be relied upon as the sole source of learning. It can help reinforce concepts learned through formal education, but it may not cover all the required topics or go into the depth necessary for a comprehensive understanding. Hence, it is recommended to use infotainment as a complementary learning tool in combination with traditional education sources.
Related Technology Terms

• Telematics
• Smart Device Integration
• In-car Entertainment System
• Navigation System
• Voice Recognition
Splunk® SOAR (On-premises): Administer Splunk SOAR (On-premises)

Create custom severity names

Severity defines the impact or importance of an event or case. Different severity names have different assigned service level agreements in the Response page. Splunk SOAR ships with three predefined severity names: High, Medium, and Low. Your organization might need additional levels of severity to match your business processes. Additional severity names can be defined by a Splunk SOAR administrator. You can create up to 10 severities in Splunk SOAR.

Create a severity in Splunk SOAR

To create a severity, follow these steps:

1. From the Home menu, select Administration.
2. Select Event Settings > Severity.
3. Click Add Item.
4. Enter the severity name and select a color from the drop-down list. The severity name must adhere to the following conditions:
   • Only ASCII characters a-z, 0-9, dash ( - ), or underscores ( _ ) are allowed.
   • The name cannot exceed 20 characters in length.
5. Click Done.

Severity names cannot be edited. To change a severity name, delete it and recreate the severity name. To reorder severity names, drag the handle ( ☰ ) on the left side of the severity name's input box to the desired position. To set the severity name used as the default severity, select the desired name from the drop-down list.

Delete a severity name in Splunk SOAR

To delete a severity name, click the circled x ( ⓧ ) to the right of the severity name's input box. Take note of the following behaviors before you delete a severity:

• The severity label set as the default severity cannot be removed until a new default is selected.
• Deleting a severity name does not change the severity of a case, event, or artifact. Changing a severity name does not update closed events, cases, or artifacts.
• Deleted severity names appear in search results as strikethrough text.
• Severity names are stored in Splunk SOAR's internal database. Deleting a severity name from the active severity list does not remove that severity name from the database.
• To maintain backwards compatibility with apps and existing playbooks, if the severity names High, Medium, or Low have been deleted, ingestion apps and the REST API can still assign the severity High, Medium, and Low to events, containers, or artifacts.

Last modified on 20 May, 2022

This documentation applies to the following versions of Splunk® SOAR (On-premises): 5.3.3, 5.3.4, 5.3.5, 5.4.0
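The naming rules above (ASCII letters, digits, dash, or underscore, at most 20 characters) are simple enough to encode as a validation check. The snippet below is purely illustrative and not part of Splunk SOAR or its API; accepting uppercase letters is an assumption on my part, since the built-in names High, Medium, and Low are capitalized while the documented character set is a-z.

```python
import re

# Illustration only, not a Splunk SOAR API. Encodes the documented naming
# rules: ASCII letters, digits, dash, or underscore, 1 to 20 characters.
_SEVERITY_NAME = re.compile(r"^[A-Za-z0-9_-]{1,20}$")

def is_valid_severity_name(name: str) -> bool:
    return _SEVERITY_NAME.fullmatch(name) is not None

print(is_valid_severity_name("High"))         # True
print(is_valid_severity_name("sev-1"))        # True
print(is_valid_severity_name("bad name"))     # False: contains a space
print(is_valid_severity_name("x" * 21))       # False: over 20 characters
```

A check like this could run before calling the administration UI or an ingestion script, so that invalid names are rejected early.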
• Single Sign On with Windows NT and IIS and Domino
Hi there, my dear peers, what solutions do you know for the following situation: a user has logged in to Windows NT using a certain name/password combination. He then starts his web browser and opens an application on an IIS web server (same domain) where he must be authenticated without entering...

• Problem with Sametime login
After entering my user name and password and pressing Enter, I see a popup message: "The Sametime server seems to be unavailable. Please try again later." I have been facing this problem for many days. Can you please suggest any ideas on how to resolve this issue?

• Rename Lotus QuickPlace
Hi, does anyone know if and how to rename a Lotus 8.5 QuickPlace? I'm talking about the physical file name of the place. Regards, Theis Hagen Hansen

• Modify script to send an email when a field contains a certain value
I have a form that is filled out, and it emails a group and a single person notifying them of a returned item. I would like to modify the script to notify a third party if a field contains a certain value. Sub Querysave(Source As Notesuidocument, Continue As Variant) Dim session As New NotesSession...

• Broadcast in Domino Administrator
When I broadcast a message in Domino 8.5, it doesn't go to all logged-on Notes users. Why? How do I resolve this?

• Monitor Internet bandwidth
Good day, we are using Windows 2000 Small Business Server in our organization with Exchange, DNS, and ISA Server configured. 1. I want to monitor the Internet usage of the users. 2. I want to get a report, on a monthly or daily basis, of how many MB/GB each user used. Thanks for your reply in...

• Scheduled event announcement sent to invitees
I have an announcement to make and also need to schedule the notification to the invitees. How can I create such an event announcement in Lotus Notes 8.5?

• How to do SFTP in a LotusScript scheduled agent
Could anyone give me code for SFTP in LotusScript? My requirement is to copy a file using SFTP from a Unix box to a Windows 2008 server in some xyz location. I'm able to do FTP for the same process; now I need to switch to SFTP.

• BES with Domino
In our company we use Lotus Notes as the mail server, and we have BlackBerry Enterprise Server for our BlackBerry devices. The problem is that when a user from our company sends e-mail to a user who has a BlackBerry, that user does not see who the message arrived from. In the From field he sees not an internet...

• Restricting multiple URL and doclink entries in fields
I have two fields. Field A is for entering a single URL link, e.g. "www.cnn.com". I want to prevent users from entering another URL link into this field once it already contains "www.cnn.com". How do I go about this? I also have field B, which is a Rich Text Lite field that allows only doclinks. How can I...

• What does this Lotus script mean?
What does this code snippet mean? Can anybody explain it to me? @Length(@Text(@Month(@Today))) = 2 & @Length(@Text(@Day(@Today))) < 2

• LotusScript: Running an agent that updates RSS feeds
Hi, I am running an agent that updates my RSS feeds. When I run it locally it runs fine, but when I schedule it to run on our server it doesn't retrieve anything from the web. Help! CODE Sub GetnUpdateFeed(Document As NotesDocument) On Error Goto errGetnUpdateFeed Dim strtext As String Dim session...

• Delete attachment in LotusScript / Lotus Notes 8.5.3
I want to delete an attachment using an action button; the user will provide a number (1, 2, or 3...) and that attachment should be deleted.

• Modify field based on a certain key in LotusScript / Lotus Notes 8.5.3
What would be the recommended way to modify the Readers field based on a certain key? If a Readers field needs to be updated frequently from the master document, how can this be achieved efficiently? I have two plans: 1) I am thinking of doing this in the Querysave event by adding or removing names in the...

• Configuring Lotus Sametime 7.5.1
Is there a way to configure Lotus Sametime to show what your chat partner is typing before he sends the text? Or a plugin that allows this option?

• Lotus Sametime 7.5.1 contact list
Is there any way to know who has added me to his contact list?

• Is it possible to check what users have blocked you in IBM Lotus Sametime?
Hello, is it possible to check in IBM Lotus Sametime which users have blocked you, and their status, even if you can't chat with them? Maybe with a newer version (I have 7.5) or with a plugin. Thanks!

• How can you block other users from seeing you on IBM Lotus Sametime?
Using IBM Lotus Sametime, is it possible to block users from seeing you online, but still see them? If so, how can it be done?

• Continually getting a Lotus Notes error message
Running an 8.5.1 FP1 client, one user (so far) continually gets this error: "Notes IPC Async Message Processor (62)". You can click OK and continue. I have deleted the cache, bookmarks, and desktop files. It goes away for a time, but the error returns.

• Lotus Notes error message: File does not exist
I'm getting this error message: "Notes Error: File does not exist (quota Profile)". What should I do?
Logarithms and Exponents Multiple Choice Questions 3 (PDF Download)

Grade 9 math test 3 on the laws of logarithms, with multiple choice questions and answers for online courses, learning, and test prep.

MCQs on Logarithms and Exponents, Quiz Worksheet 3

MCQ. log A - log B can be written as
1. log (A - B)
2. log (A⁄B)
3. log (B⁄A)
4. log A + log B
Answer: B

MCQ. The decimal form of 453.5 × 10⁻⁶ is
1. 0.04535
2. 4535
3. 0.0004535
4. 45.35
Answer: C
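Both answers can be checked numerically. The short Python snippet below is an illustration added here, not part of the quiz:

```python
import math

# Answer B: log A - log B = log (A/B); check numerically for sample values.
A, B = 7.0, 3.0
assert math.isclose(math.log(A) - math.log(B), math.log(A / B))

# Answer C: 453.5 x 10^-6 written out in decimal form.
print(f"{453.5e-6:.7f}")  # 0.0004535
```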
Is there a boost method in Steemit, and if not, why?

in #steemit · 2 months ago

I started using Steemit in 2017 and it was the best platform for me. Then hive.blog came along; it was identical to Steemit, and I started using it because it was not centralised like Steemit, which felt safer to me. I also found its ecosystem more promising. Steemit can still do more, but I do not understand why it has not worked on this until now. I really want Steemit to do more; please tell me if I am wrong, as maybe I am not seeing the big picture.
Contents of /slime/swank.lisp (ViewVC)

Revision 1.2, Thu Sep 4 11:41:59 2003 UTC, by lukeg. Branch: MAIN. Changes since 1.1: +19 -10 lines.
Completion now works for internal symbols, i.e. 'package::foo' can be used to complete non-exported symbols. Some cleanups.

(defpackage :swank
  (:use :common-lisp :wire)
  (:export #:start-server #:evaluate #:lookup-notes
           #:swank-compile-file #:arglist-string #:completions))

(in-package :swank)

(defconstant server-port 4004
  "Default port for the swank TCP server.")

(defconstant +internal-error+ 56)
(defconstant +condition+ 57)
(defconstant +ok+ 42)

(define-condition swank-error (simple-error) ())

(defvar *notes-database* (make-hash-table :test #'equal)
  "Database of recorded compiler notes/warnings/errors (keyed by filename).
Each value is a list of (LOCATION SEVERITY MESSAGE CONTEXT) lists.
  LOCATION is a position in the source code (integer or source path).
  SEVERITY is one of :ERROR, :WARNING, and :NOTE.
  MESSAGE is a string describing the note.
  CONTEXT is a string giving further details of where the error occurred.")

(defvar *swank-debug-p* nil
  "When true extra debug printouts are enabled.")

;;; Setup and hooks.

(defun start-server (&optional (port server-port))
  (wire:create-request-server port nil :reuse-address t)
  (setf c:*record-xref-info* t)
  (ext:without-package-locks
   (setf c:*compiler-notification-function* #'handle-notification))
  (when *swank-debug-p*
    (format *debug-io* "~&Swank ready.~%")))

(defun debugger-hook (condition old-hook)
  "Hook function to be invoked instead of the debugger.
See CL:*DEBUGGER-HOOK*."
  ;; FIXME: Debug from Emacs!
  (declare (ignore old-hook))
  (handler-case
      (progn (format *error-output*
                     "~@<SWANK: unhandled condition ~2I~_~A~:>~%"
                     condition)
             (debug:backtrace 20 *error-output*)
             (finish-output *error-output*))
    (condition ()
      nil)))

(defun handle-notification (severity message context where-from position)
  "Hook function called by the compiler.
See C:*COMPILER-NOTIFICATION-FUNCTION*"
  (let ((location (or (current-compiler-error-source-path) position))
        (namestring (cond ((stringp where-from) where-from)
                          ;; we can be passed a stream from READER-ERROR
                          ((lisp::fd-stream-p where-from)
                           (lisp::fd-stream-file where-from))
                          (t where-from))))
    (when namestring
      (push (list location severity message context)
            (gethash namestring *notes-database*)))))

(defun current-compiler-error-source-path ()
  "Return the source-path for the current compiler error.
Returns NIL if this cannot be determined by examining internal
compiler state."
  (let ((context c::*compiler-error-context*))
    (cond ((c::node-p context)
           (reverse
            (c::source-path-original-source (c::node-source-path context))))
          ((c::compiler-error-context-p context)
           (reverse
            (c::compiler-error-context-original-source-path context))))))

;;; Functions for Emacs to call.

;;;; EVALUATE -- interface

(defun evaluate (string package)
  "Evaluate an expression for Emacs."
  (declare (type simple-string string))
  (when *swank-debug-p*
    (format *debug-io* "~&;; SWANK:EVALUATE (~S) |~S|~%" package string))
  (handler-case
      (send-value (eval (let ((debug::*debugger-hook* #'debugger-hook)
                              (*package* (find-package package)))
                          (read-from-string string))))
    (swank-error (condition)
      (send-reply +condition+
                  (format nil
                          (simple-condition-format-control condition)
                          (simple-condition-format-arguments condition))
                  ""))))
;;    (error (condition)
;;      (send-and-log-internal-error condition))))

;;;; SWANK-COMPILE-FILE -- interface

(defun swank-compile-file (filename load-p)
  (remhash filename *notes-database*)
  (if (not (probe-file filename))
      (send-reply +condition+ "File does not exist" "")
      (handler-case
          (multiple-value-bind (output warnings failure)
              (compile-file filename :load (read-from-string load-p))
            (send-value (list (and output (namestring output))
                              warnings
                              failure)))
        (reader-error (condition)
          (send-condition condition))
        (end-of-file (condition)
          (send-condition condition))
        (package-error (condition)
          (send-condition condition))
        (c::compiler-error (condition)
          (send-condition condition (current-compiler-error-source-path)))
        (error (condition)
          (format *debug-io* "~&Condition: ~S / ~S~%" (type-of condition) condition)
          ;; Oops.
          (send-and-log-internal-error condition)))))

(defun send-reply (status message result)
  "Send a result triple over the wire to Emacs."
  (declare (type integer status))
  (when *swank-debug-p*
    (format *debug-io* "~&;; SWANK Reply: ~S, ~S, ~S~%" status message result))
  (wire-output-object *current-wire* status)
  (wire-output-object *current-wire* message)
  (wire-output-object *current-wire* result)
  (wire-force-output *current-wire*))

(defun send-value (value)
  (send-reply +ok+ "ok" (prin1-to-string value)))

(defun send-condition (condition &optional result)
  (send-reply +condition+ (princ-to-string condition) (prin1-to-string result)))

(defun send-and-log-internal-error (condition)
  (format *debug-io* "~&Internal Swank Error: ~A~%" condition)
  (send-reply +internal-error+
              (format nil "~&Internal Swank Error: ~A~%" condition)
              ""))

;;;; LOOKUP-NOTES -- interface

(defun lookup-notes (filename)
  "Return the compiler notes recorded for FILENAME.
\(See *NOTES-DATABASE* for a description of the return type.)"
  (gethash filename *notes-database*))

;;;; ARGLIST-STRING -- interface

(defun arglist-string (function)
  "Return a string describing the argument list for FUNCTION.
The result has the format \"(...)\"."
  (declare (type (or symbol function) function))
  (let ((arglist
         (if (not (or (fboundp function)
                      (functionp function)))
             "(-- <Unknown-Function>)"
             (let* ((fun (etypecase function
                           (symbol (or (macro-function function)
                                       (symbol-function function)))
                           (function function)))
                    (df (di::function-debug-function fun))
                    (arglist (kernel:%function-arglist fun)))
               (cond ((eval:interpreted-function-p fun)
                      (eval:interpreted-function-arglist fun))
                     ((pcl::generic-function-p fun)
                      (pcl::gf-pretty-arglist fun))
                     (arglist arglist)
                     ;; this should work both for
                     ;; compiled-debug-function and for
                     ;; interpreted-debug-function
                     (df (di::debug-function-lambda-list df))
                     (t "(<arglist-unavailable>)"))))))
    (if (stringp arglist)
        arglist
        (prin1-to-string arglist))))

;;;; COMPLETIONS -- interface

(defun completions (prefix package-name &optional only-external-p)
  "Return a list of completions for a symbol's PREFIX and PACKAGE-NAME.
The result is a list of symbol-name strings.  All symbols accessible in
the package are considered."
  (let ((completions nil)
        (package (find-package package-name)))
    (when package
      (do-symbols (symbol package)
        (when (and (or (not only-external-p) (symbol-external-p symbol))
                   (string-prefix-p prefix (symbol-name symbol)))
          (push (symbol-name symbol) completions))))
    completions))

(defun symbol-external-p (s)
  (multiple-value-bind (_ status)
      (find-symbol (symbol-name s) (symbol-package s))
    (declare (ignore _))
    (eq status :external)))

(defun string-prefix-p (s1 s2)
  "Return true iff the string S1 is a prefix of S2.
\(This includes the case where S1 is equal to S2.)"
  (and (<= (length s1) (length s2))
       (string= s1 s2 :end2 (length s1))))
Dynamic Programming, Explained with Edit Distance
Math Basics · 2019-01-24 · 6117 characters · 44 views

Preface

What is edit distance? We call the minimum number of operations needed to turn string A into string B the edit distance between them. The operations are: insert, delete, and replace. For example, with string A = "abc" and string B = "abcd", going from A to B only requires inserting the letter "d", so the edit distance is 1; likewise, going from B to A only requires deleting "d", so the edit distance is also 1.

State transitions

The process of reducing the problem to be solved into subproblems is called a state transition. The expression that describes a state transition is called the state transition equation.

Again take computing the edit distance between two strings as the example. If we know that string A = "abc" and string B = "abc", then their edit distance equals the edit distance between "ab" and "ab". That is how the case where the two strings end in the same letter is handled; other cases need other approaches. Suppose we know A = "abc" and B = "abcd" and want the edit distance between A and B. Consider the following options:

First option: perform an insert on string A, where the value to insert is the last letter of string B. The problem becomes computing the edit distance between "abcd" and "abcd"; now the last letters match, so the earlier conclusion applies, and the problem further reduces to the edit distance between "abc" and "abc". In effect, the original problem has been transformed: the edit distance between "abc" and "abcd" = the edit distance between "abc" and "abc" + 1. The "+1" is there because we performed one insert operation on string A.

Second option: perform a delete on string A. The problem becomes: the edit distance between "abc" and "abcd" = the edit distance between "ab" and "abcd" + 1.

Third option: perform a replace on string A. Replacement is the least obvious operation (to a computer, at least), so it deserves an extra example. Now let string A = "abcd" and string B = "abce". By eye we can tell that changing A's last letter "d" to "e" turns A into B. A computer is not that clever; it has to compare letter by letter. If, after removing the last letter of both string A and string B, the remaining strings are identical, then we consider the conversion between the two strings achievable with a single replace operation.

Among these three options, we always keep only the smallest result. The definition of edit distance requires this: it asks for the minimum number of operations. Taking the optimum in each subproblem guarantees that the final result is optimal.

Putting all of this together, the state transition equation is as follows:

"""
strA and strB are the two strings A and B.
get_edit_distance(strA, strB) computes the edit distance between strA and strB.
"""
if strA[-1] == strB[-1]:  # when the last letters match
    distance = get_edit_distance(strA[:-1], strB[:-1])
else:  # when the last letters differ
    distance = min(  # keep only the optimal result
        get_edit_distance(strA, strB[:-1]) + 1,       # insert
        get_edit_distance(strA[:-1], strB) + 1,       # delete
        get_edit_distance(strA[:-1], strB[:-1]) + 1,  # replace
    )

Recursive implementation

With the transition equation above, computing the edit distance recursively is quite easy (though really not efficient):

def get_edit_distance(strA, strB):
    if len(strB) == 0:    # handle strB being the empty string
        return len(strA)
    elif len(strA) == 0:  # handle strA being the empty string
        return len(strB)
    else:
        if strA[-1] == strB[-1]:
            return get_edit_distance(strA[:-1], strB[:-1])
        else:
            return min(
                get_edit_distance(strA, strB[:-1]) + 1,
                get_edit_distance(strA[:-1], strB) + 1,
                get_edit_distance(strA[:-1], strB[:-1]) + 1
            )

Iterative implementation

Iteration is really recursion run in the opposite direction. With recursion we start from the last letter of each string; with iteration we start from the first. We can draw a table describing how the edit distance between the two strings evolves. Here string A = "mouuse" and string B = "mouse". The table (reconstructed from the transition equation; the original figure is missing from this copy) is as follows:

        ""  m  o  u  s  e
  ""     0  1  2  3  4  5
  m      1  0  1  2  3  4
  o      2  1  0  1  2  3
  u      3  2  1  0  1  2
  u      4  3  2  1  1  2
  s      5  4  3  2  1  2
  e      6  5  4  3  2  1

From this table we can easily read off the edit distance of every subproblem on the way from A to B. For example, the edit distance between "mou" and "mouse" is 2, and the edit distance between "mouu" and "m" is 3. The purpose of the program we are about to write is precisely to fill in this table.

One point deserves attention: because the empty string must be considered, the table we build is not of size len(strA) * len(strB) but (len(strA)+1) * (len(strB)+1).

Also, in Python a two-dimensional array can be created with a quick-and-dirty trick:

d = [[0]*(len(strB)+1) for __ in range(len(strA)+1)]

Here is the complete implementation:

def get_edit_distance(strA, strB):
    d = [[0]*(len(strB)+1) for __ in range(len(strA)+1)]
    # when strB is the empty string
    for i, __ in enumerate("a" + strA):  # 'a' is padding, meaning we start from strA being empty
        d[i][0] = i
    # when strA is the empty string
    for j, __ in enumerate("a" + strB):  # 'a' is padding, meaning we start from strB being empty
        d[0][j] = j
    # note that range's first argument is 1: we start filling the table at the
    # second row and second column, since the first row and first column
    # were already initialized above
    for i in range(1, len(strA)+1):
        for j in range(1, len(strB)+1):
            if strA[i-1] == strB[j-1]:  # unlike the recursion, we do not compare from the last letter
                d[i][j] = d[i-1][j-1]
            else:
                # keep the optimum among insert, delete, and replace
                d[i][j] = min(d[i][j-1]+1, d[i-1][j]+1, d[i-1][j-1]+1)
    return d[-1][-1]

Optimizing the code

Let's keep analyzing the code above.

When the last letters match, the state transition is d[i][j] = d[i-1][j-1]; in the table, the new cell depends only on its upper-left neighbor.

When the last letters differ, the state transition equation is d[i][j] = min(d[i][j-1]+1, d[i-1][j]+1, d[i-1][j-1]+1); in the table, the new cell depends on its left, upper, and upper-left neighbors.

In other words, computing the edit distance between two strings only ever needs the neighbors to the left and above, plus the upper-left neighbor. So we do not need a two-dimensional array; a one-dimensional array is enough, which lowers the space complexity.

Here I suggest comparing the lengths of the two input strings and taking the shorter one as the length of the one-dimensional array, to keep the memory footprint as small as possible:

def get_edit_distance(strA, strB):
    strMaxlen = strA if len(strA) >= len(strB) else strB
    strMinlen = strA if strMaxlen != strA else strB
    d = [0] * (len(strMinlen)+1)
    # ...

Then initialize this one-dimensional array. At this point we need to be clear about what the array now means: it holds, for the case where the longer string is the empty string, the edit distance of every prefix of the shorter string. So the initialization is:

def get_edit_distance(strA, strB):
    # ...
    for i, __ in enumerate("a"+strMinlen):
        d[i] = i
    # ...

Since our array d is now one-dimensional, it can only hold one row's worth of data, and it always holds "old data". For example, while we are computing the edit distance between "mouu" and "mouse", d still holds the distances for "mou" versus "mouse". Next comes the core of the code, namely how to keep the data from the three positions we need:

def get_edit_distance(strA, strB):
    # ...
    for i in range(1, len(strMaxlen)+1):
        # since d always holds old data, for the second element of a new row
        # d[0] is exactly that element's upper-left neighbor
        leftTop = d[0]
        # we always start from the second element, so the first one
        # has to be filled in by hand
        d[0] = i
        # `for i in range(1, len(strMaxlen)+1)` plays the same role for
        # strMaxlen that `enumerate("a"+strMinlen)` played during
        # initialization, skipping the index 0 handled just above
        for j in range(1, len(strMinlen)+1):
            # before the current element is overwritten with new data, it is the
            # upper-left neighbor of the next position, so stash it in a temporary
            temp = d[j]
            if strMaxlen[i-1] == strMinlen[j-1]:
                d[j] = leftTop
            else:
                d[j] = min(d[j-1]+1, d[j]+1, leftTop+1)
            leftTop = temp
    # ...
Let me elaborate on "before the current element is overwritten with new data, it is the upper-left neighbor of the next position". Suppose array d currently holds "4 3 2 1 1 2". A new row is filled in from left to right, so right before we update the element at index n, the value sitting at that position is still the previous row's value at index n.

The complete code:

def get_edit_distance(strA, strB):
    strMaxlen = strA if len(strA) >= len(strB) else strB
    strMinlen = strA if strMaxlen != strA else strB
    d = [0] * (len(strMinlen)+1)
    # save space by repeatedly refreshing a single row
    for i, __ in enumerate("a"+strMinlen):
        d[i] = i
    for i in range(1, len(strMaxlen)+1):
        leftTop = d[0]  # plays the role of the upper-left neighbor: d[i-1][j-1]
        d[0] = i        # the leftmost value of each row
        for j in range(1, len(strMinlen)+1):
            temp = d[j]
            if strMaxlen[i-1] == strMinlen[j-1]:
                d[j] = leftTop
            else:
                d[j] = min(d[j-1]+1, d[j]+1, leftTop+1)
            leftTop = temp
    return d[len(strMinlen)]

Dynamic programming

So what is dynamic programming? Answer: among the various partial solutions, keep the partial solutions that may lead to the optimum and discard the others. This process of searching for the optimal solution is dynamic programming.

Computing the edit distance between two strings, as above, is exactly dynamic-programming thinking. Its core is the state transition equation: once you can find a suitable equation, the shape of the dynamic program becomes clear.

Acknowledgements

This article was written by Guan and is licensed under Creative Commons Attribution 3.0; it may be freely reposted and quoted, provided the author is credited and the source of the article is noted.
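As a quick sanity check (illustrative usage, not part of the original article), the space-optimized implementation derived above reproduces the examples discussed in the text; the only cosmetic change here is initializing the row with list(range(...)), which is equivalent to the article's enumerate-based loop.

```python
# Space-optimized edit distance, as derived in the article above.
def get_edit_distance(strA, strB):
    strMaxlen = strA if len(strA) >= len(strB) else strB
    strMinlen = strA if strMaxlen != strA else strB
    d = list(range(len(strMinlen) + 1))  # row for "the longer string is empty"
    for i in range(1, len(strMaxlen) + 1):
        leftTop = d[0]   # d[i-1][j-1]
        d[0] = i
        for j in range(1, len(strMinlen) + 1):
            temp = d[j]  # current value becomes the next cell's upper-left
            if strMaxlen[i-1] == strMinlen[j-1]:
                d[j] = leftTop
            else:
                d[j] = min(d[j-1] + 1, d[j] + 1, leftTop + 1)
            leftTop = temp
    return d[len(strMinlen)]

# The examples from the article:
print(get_edit_distance("abc", "abcd"))      # one insert -> 1
print(get_edit_distance("abcd", "abce"))     # one replace -> 1
print(get_edit_distance("mouuse", "mouse"))  # one delete -> 1
```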
Super User is a question and answer site for computer enthusiasts and power users.

A few minutes ago, while browsing the internet with Firefox 8.0.1 on Windows XP (Professional, 32-bit), I saw some pop-up, probably with system or Firefox information, blink (appear and disappear momentarily). It couldn't have been an ad pop-up, because I was using Facebook and Google.com when this happened. I didn't have time to read what was written, so I don't know what happened. What I'd like to know is: is there an option in Windows XP and later Windows systems, for power users, that logs every system or third-party application info pop-up? And if there is, how does it work and how can I turn it on? Cheers.

Accepted answer:

My research didn't yield an automatic way to do this, but I'm hoping to leave breadcrumbs for folks who need to solve this in various ways.

For your specific problem, the Sysinternals Process Monitor might be able to help you catch it in flight, but it's probably more heavyweight than what you're looking for, as I suspect leaving it running all the time would be pretty resource-intensive.

Cobbling together something from existing parts might be tricky. GetWindowText lets you harvest the contents of a window, but it requires human intervention. You might be able to write an AutoHotkey script to run GetWindowText on all windows of a certain type, etc.

It's pretty clear that all of the pieces are there to make an application that would do this. Microsoft has a Dialog Box Filter included with Windows Embedded that monitors constantly for any dialog box with a specific title, and suppresses it. I suspect that someone with more Windows development fu could probably crank something out relatively quickly that does exactly what you're describing.
Most of them probably haven't created one because Visual Studio and kin probably have tools built in to trace window creation. WindowInterceptor appears to be source code that would be a good starting point. If I find something that's a better match, I'll update. And when I get 10 rep, I'll come back and fix the URLs. :-)

Comment: Thanks for the reply, Royce :) The thing is, even if I could write such an app, I would have to invest a lot of my time. But the question is: why didn't MS implement it into Windows XP, or at least the newer systems? Or maybe such a tool is integrated into Windows but hidden from others, available only to selected people, like Microsoft MVPs? Who knows? – spaffy Dec 11 '11 at 22:02
binScatterPlot
Scatter plot of bins for tall arrays

Description

binScatterPlot(X,Y) creates a binned scatter plot of the data in X and Y. The binScatterPlot function uses an automatic binning algorithm that returns bins with a uniform area, chosen to cover the range of elements in X and Y and reveal the underlying shape of the distribution.

binScatterPlot(X,Y,nbins) specifies the number of bins to use in each dimension.

binScatterPlot(X,Y,Xedges,Yedges) specifies the edges of the bins in each dimension using the vectors Xedges and Yedges.

binScatterPlot(X,Y,Name,Value) specifies additional options with one or more name-value pair arguments using any of the previous syntaxes. For example, you can specify 'Color' and a valid color option to change the color theme of the plot, or 'Gamma' with a positive scalar to adjust the level of detail.

h = binScatterPlot(___) returns a Histogram2 object. Use this object to inspect properties of the plot.

Examples

Create two tall vectors of random data. Create a binned scatter plot for the data.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)
X = tall(randn(1e5,1));
Y = tall(randn(1e5,1));
binScatterPlot(X,Y)

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 1.4 sec
Evaluation completed in 2.7 sec
Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.23 sec
Evaluation completed in 0.32 sec

The resulting figure contains a slider to adjust the level of detail in the image.
Specify a scalar value as the third input argument to use the same number of bins in each dimension, or a two-element vector to use a different number of bins in each dimension.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)

Plot a binned scatter plot of random data sorted into 100 bins in each dimension.

X = tall(randn(1e5,1));
Y = tall(randn(1e5,1));
binScatterPlot(X,Y,100)

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 1.1 sec
Evaluation completed in 1.5 sec
Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.35 sec
Evaluation completed in 0.51 sec

Use 20 bins in the x-dimension and continue to use 100 bins in the y-dimension.

binScatterPlot(X,Y,[20 100])

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.21 sec
Evaluation completed in 0.35 sec
Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.12 sec
Evaluation completed in 0.17 sec

Plot a binned scatter plot of random data with specific bin edges. Use bin edges of Inf and -Inf to capture outliers.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)

Create a binned scatter plot with 100 bin edges between [-2 2] in each dimension. The data outside the specified bin edges is not included in the plot.
X = tall(randn(1e5,1));
Y = tall(randn(1e5,1));
Xedges = linspace(-2,2);
Yedges = linspace(-2,2);
binScatterPlot(X,Y,Xedges,Yedges)

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 1 sec
Evaluation completed in 1.3 sec

Use coarse bins extending to infinity on the edges of the plot to capture outliers.

Xedges = [-Inf linspace(-2,2) Inf];
Yedges = [-Inf linspace(-2,2) Inf];
binScatterPlot(X,Y,Xedges,Yedges)

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.29 sec
Evaluation completed in 0.4 sec

Plot a binned scatter plot of random data, specifying 'Color' as 'c'.

When you perform calculations on tall arrays, MATLAB® uses either a parallel pool (default if you have Parallel Computing Toolbox™) or the local MATLAB session. To run the example using the local MATLAB session when you have Parallel Computing Toolbox, change the global execution environment by using the mapreducer function.

mapreducer(0)

X = tall(randn(1e5,1));
Y = tall(randn(1e5,1));
binScatterPlot(X,Y,'Color','c')

Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 3.8 sec
Evaluation completed in 5.3 sec
Evaluating tall expression using the Local MATLAB Session:
- Pass 1 of 1: Completed in 0.32 sec
Evaluation completed in 0.43 sec

Input Arguments

Data to distribute among bins, specified as separate arguments of tall vectors, matrices, or multidimensional arrays. X and Y must be the same size. If X and Y are not vectors, then binScatterPlot treats them as single column vectors, X(:) and Y(:).

Corresponding elements in X and Y specify the x and y coordinates of 2-D data points, [X(k),Y(k)]. The underlying data types of X and Y can be different, but binScatterPlot concatenates these inputs into a single N-by-2 tall matrix of the dominant underlying data type.

binScatterPlot ignores all NaN values.
Similarly, binScatterPlot ignores Inf and -Inf values, unless the bin edges explicitly specify Inf or -Inf as a bin edge.

Note: If X or Y contain integers of type int64 or uint64 that are larger than flintmax, then it is recommended that you explicitly specify the bin edges. binScatterPlot automatically bins the input data using double precision, which lacks integer precision for numbers greater than flintmax.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

Number of bins in each dimension, specified as a positive scalar integer or two-element vector of positive integers. If you do not specify nbins, then binScatterPlot automatically calculates how many bins to use based on the values in X and Y.

• If nbins is a scalar, then binScatterPlot uses that many bins in each dimension.
• If nbins is a vector, then nbins(1) specifies the number of bins in the x-dimension and nbins(2) specifies the number of bins in the y-dimension.

Example: binScatterPlot(X,Y,20) uses 20 bins in each dimension.
Example: binScatterPlot(X,Y,[10 20]) uses 10 bins in the x-dimension and 20 bins in the y-dimension.

Bin edges in x-dimension, specified as a vector. Xedges(1) is the first edge of the first bin in the x-dimension, and Xedges(end) is the outer edge of the last bin.

The value [X(k),Y(k)] is in the (i,j)th bin if Xedges(i) ≤ X(k) < Xedges(i+1) and Yedges(j) ≤ Y(k) < Yedges(j+1). The last bins in each dimension also include the last (outer) edge. For example, [X(k),Y(k)] falls into the ith bin in the last row if Xedges(end-1) ≤ X(k) ≤ Xedges(end) and Yedges(i) ≤ Y(k) < Yedges(i+1).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

Bin edges in y-dimension, specified as a vector. Yedges(1) is the first edge of the first bin in the y-dimension, and Yedges(end) is the outer edge of the last bin.
The value [X(k),Y(k)] is in the (i,j)th bin if Xedges(i) ≤ X(k) < Xedges(i+1) and Yedges(j) ≤ Y(k) < Yedges(j+1). The last bins in each dimension also include the last (outer) edge. For example, [X(k),Y(k)] falls into the ith bin in the last row if Xedges(end-1) ≤ X(k) ≤ Xedges(end) and Yedges(i) ≤ Y(k) < Yedges(i+1).

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: binScatterPlot(X,Y,'BinWidth',[5 10])

Binning algorithm, specified as the comma-separated pair consisting of 'BinMethod' and one of these values.

'auto': The default 'auto' algorithm uses a maximum of 100 bins and chooses a bin width to cover the data range and reveal the shape of the underlying distribution.
'scott': Scott's rule is optimal if the data is close to being jointly normally distributed. This rule is appropriate for most other distributions, as well. It uses a bin size of [3.5*std(X)*numel(X)^(-1/4), 3.5*std(Y)*numel(Y)^(-1/4)].
'integers': The integer rule is useful with integer data, as it creates a bin for each integer. It uses a bin width of 1 and places bin edges halfway between integers. To avoid accidentally creating too many bins, this rule has a limit of 65536 bins (2^16). If the data range is greater than 65536, then the integer rule uses wider bins instead.

Note: The BinMethod property of the resulting Histogram2 object always has a value of 'manual'.

Width of bins in each dimension, specified as the comma-separated pair consisting of 'BinWidth' and a scalar or two-element vector of positive integers, [xWidth yWidth]. A scalar value indicates the same bin width for each dimension.
If you specify BinWidth, then binScatterPlot can use a maximum of 1024 bins (2^10) along each dimension. If instead the specified bin width requires more bins, then binScatterPlot uses a larger bin width corresponding to the maximum number of bins.

Example: binScatterPlot(X,Y,'BinWidth',[5 10]) uses bins with size 5 in the x-dimension and size 10 in the y-dimension.

Plot color theme, specified as the comma-separated pair consisting of 'Color' and one of these options.

'b': Blue
'm': Magenta
'c': Cyan
'r': Red
'g': Green
'y': Yellow
'k': Black

Gamma correction, specified as the comma-separated pair consisting of 'Gamma' and a positive scalar. Use this option to adjust the brightness and color intensity to affect the amount of detail in the image.

• gamma < 1: As gamma decreases, the shading of bins with smaller bin counts becomes progressively darker, including more detail in the image.
• gamma > 1: As gamma increases, the shading of bins with smaller bin counts becomes progressively lighter, removing detail from the image.
• The default value of 1 does not apply any correction to the display.

Bin limits in x-dimension, specified as the comma-separated pair consisting of 'XBinLimits' and a two-element vector, [xbmin,xbmax]. The vector indicates the first and last bin edges in the x-dimension.

binScatterPlot only plots data that falls within the bin limits inclusively, Data(Data(:,1)>=xbmin & Data(:,1)<=xbmax).

Bin limits in y-dimension, specified as the comma-separated pair consisting of 'YBinLimits' and a two-element vector, [ybmin,ybmax]. The vector indicates the first and last bin edges in the y-dimension.

binScatterPlot only plots data that falls within the bin limits inclusively, Data(Data(:,2)>=ybmin & Data(:,2)<=ybmax).

Output Arguments

Binned scatter plot, returned as a Histogram2 object. For more information, see Histogram2 Properties.

Extended Capabilities

Introduced in R2016b
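The bin-membership rule documented above (left edge inclusive, right edge exclusive, with the outer edge folded into the last bin) can be sketched in plain Python for illustration. This is a model of the documented behavior, not MathWorks code, and it covers only the edge logic, none of the tall-array machinery.

```python
import bisect

def bin_index(edges, v):
    """Return the 0-based bin index i such that edges[i] <= v < edges[i+1],
    with the last bin also including its outer edge; None if v is out of range."""
    if v < edges[0] or v > edges[-1]:
        return None
    if v == edges[-1]:           # the outer edge belongs to the last bin
        return len(edges) - 2
    return bisect.bisect_right(edges, v) - 1

def bin2d(xedges, yedges, x, y):
    """Apply the rule per dimension to place a point [x, y] into 2-D bin (i, j)."""
    i, j = bin_index(xedges, x), bin_index(yedges, y)
    return None if i is None or j is None else (i, j)

edges = [0.0, 1.0, 2.0, 3.0]
print(bin_index(edges, 1.0))  # 1: left edges are inclusive
print(bin_index(edges, 3.0))  # 2: the last bin includes its outer edge
```

Wrapping the first and last edges with -Inf and Inf, as in the outlier example above, simply makes the out-of-range branch unreachable.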
Function Domain of Definition

Tool to compute the domain of definition of a function f(x), that is, the set of x values that have an image under the function f (from the function's equation or from its curve).

Category: Functions

Answers to Questions (FAQ)

What is the set of definition of a function? (Definition)

A function $ f $ over $ \mathbb{R} $ has a set of definition (or domain of definition), written $ \mathcal{D}_f $ or $ D_f $, which is the set of real numbers that admit an image under the function $ f $.

Example: The set of definition of the function $ x^3 $ is $ \mathbb{R} = ] -\infty ; +\infty [ $ because every real number has a cube. The set of definition of the function $ \sqrt{x} $ is $ \mathbb{R^+} = [0;+\infty [ $ because only the nonnegative reals have a square root.

How to find the domain of definition of a function?

Computing the set of definition of a function over $ \mathbb{R} = ]-\infty ; +\infty [ $ means determining the values for which the function exists and those for which it does not, that is, all the values of the variable $ x $ for which $ f(x) $ is not defined.
From the function's equation

There are generally 3 main cases of undefined values (for real-valued functions):
- division by $ 0 $ (zero denominator), since $ 0 $ has no inverse
- square root of a negative number: $ \sqrt{x} $ is only defined for $ x \ge 0 $ over $ \mathbb{R} $
- logarithm of a nonpositive number: $ \log(x) $ is only defined for $ x > 0 $

dCode computes and checks the values that have no image under the function $ f $ and returns the interval corresponding to the function's domain of definition.

Example: Let $ f(x) = \sqrt{1-2x} $. Since the expression under the square root cannot be negative, compute the values such that $ 1-2x \ge 0 \iff x \le 1/2 $. So $ f(x) $ exists if and only if $ x \le 1/2 $. The domain of definition can also be written $ D = ] -\infty ; 1/2 ] $

From the function's curve

The idea is to look for the values at which the curve has no point, either because there is a vertical asymptote there or because no value is defined at all.

What do the domains R+, R- and R* mean?
To simplify and shorten the writing of the intervals of domains of definition, some domains are abbreviated as follows:
- $ \mathbb{R} $ is the domain of the real numbers, also written $ ]-\infty ;+\infty [ $
- $ \mathbb{R^+} $ (R plus) is the domain of the nonnegative reals (0 included), also written $ [0;+\infty [ $
- $ \mathbb{R^-} $ (R minus) is the domain of the nonpositive reals (0 included), also written $ ]-\infty; 0] $
- $ \mathbb{R^*} $ (R star) is the domain of the reals without 0, that is, all real numbers excluding the value 0, also written $ ]-\infty; 0[ \cup ]0;+\infty [ $
- $ \mathbb{R_+^*} $ (R star plus) is the domain of the positive reals (0 excluded), also written $ ]0;+\infty [ $
- $ \mathbb{R_-^*} $ (R star minus) is the domain of the negative reals (0 excluded), also written $ ]-\infty; 0[ $
- $ \mathbb{R}\backslash\lbrace{n}\rbrace $ is the domain of the real numbers without the number $ n $, also written $ ]-\infty; n[ \cup ]n;+\infty [ $

What is a preimage ("antécédent")?

Given a function y = f(x), the number y is called the image of x, and x is called a preimage ("antécédent") of y under the function f over the domain of definition D.

What is the domain of existence of a function?

The domain of existence and the domain of definition of a function are identical; it is the same concept.

What is the difference between a set of definition and a domain of definition?

A set of definition and a domain of definition are 2 expressions designating the same thing.

Source code

dCode retains ownership of the "Function Domain of Definition" source code.
Except for explicit open source licenses (marked Creative Commons / free), the algorithm for "Function Domain of Definition", the applet or snippet (converter, solver, encryption / decryption, encoding / decoding, translator), the functions related to "Function Domain of Definition" (compute, convert, solve, decrypt / encrypt, decode / encode, translate) written in any computer language (Python, Java, C#, PHP, Javascript, Matlab, etc.), and the data, downloads, scripts, and API access to "Function Domain of Definition" are not public; the same applies to offline use on PC, mobile, tablet, or in an iPhone or Android app! Reminder: dCode is free.

Citation

Copying and pasting the "Function Domain of Definition" page or its results is allowed (even for commercial use) as long as you credit dCode! Exporting the results as a .csv or .txt file is free: click the export icon.

Cite as a bibliographic source: Function Domain of Definition on dCode.fr [online website], retrieved 10/09/2024, https://www.dcode.fr/domaine-definition-fonction

© 2024 dCode. The essential 'toolbox' for solving every game / riddle / geocache / CTF.
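To close with a concrete check of the worked example on this page ($ f(x) = \sqrt{1-2x} $), here is a small Python sketch. It is purely illustrative and unrelated to dCode's own (non-public) solver: it simply tests pointwise whether f is defined, matching the condition $ 1-2x \ge 0 $.

```python
import math

def in_domain(x):
    """True when f(x) = sqrt(1 - 2x) is defined over the reals,
    i.e. when the radicand 1 - 2x is nonnegative."""
    return 1 - 2 * x >= 0

def f(x):
    if not in_domain(x):
        raise ValueError("x is outside D = ]-inf ; 1/2]")
    return math.sqrt(1 - 2 * x)

print(in_domain(0.5))  # True: 1/2 is the right endpoint of the domain
print(in_domain(0.6))  # False: the radicand would be negative
print(f(0.5))          # 0.0
```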
This lesson provided by:
Author: Valerie Harden
System: Mobile County
School: Dixon Elementary School
Lesson Plan ID: 26362
Title: Share and Share Alike (Equal Parts)

Overview/Annotation:
This Five E's AMSTI lesson plan equips students to divide an object into equal parts. A story and interactive whiteboard activity about sharing food demonstrate the idea of equal halves of circles, after which students attempt to halve squares and rectangles and explain their findings to the group. Finally, children divide paper and electronic pizzas into 3 and 4 equal parts. This lesson plan was created by exemplary Alabama Math Teachers through the AMSTI project.

Content Standard(s):
MA2013(K) 18. Correctly name shapes regardless of their orientations or overall size. [K-G2]
MA2013(1) 21. Partition circles and rectangles into two and four equal shares; describe the shares using the words halves, fourths, and quarters; and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares. [1-G3]
MA2013(2) 25. Partition a rectangle into rows and columns of same-size squares, and count to find the total number of them. [2-G2]
MA2013(2) 26. Partition circles and rectangles into two, three, or four equal shares; describe the shares using the words halves, thirds, half of, a third of, etc.; and describe the whole as two halves, three thirds, or four fourths. Recognize that equal shares of identical wholes need not have the same shape. [2-G3]

Local/National Standards:
2009 Mathematics ACOS Standards (Kindergarten):
#5 Recognize that a whole object can be divided into parts.
· Distinguishing parts of a whole as equal or not equal
#8 Identify two-dimensional (plane) shapes, including rectangle, square, circle, triangle, hexagon, trapezoid, and rhombus, and three-dimensional (solid) figures, including sphere, cone, and cylinder.

National Council of Teachers of Mathematics (NCTM) Principles and Standards for School Mathematics, Process Standards:
Reasoning and Proof: Select and use various types of reasoning and methods of proof.
Communication: Communicate their mathematical thinking coherently and clearly to peers, teachers, and others.

Primary Learning Objective(s):
The students will be able to distinguish between equal and unequal parts of shapes, and problem-solve to find ways to divide shapes equally.

Additional Learning Objective(s):
Students will develop problem solving and collaboration skills as they work together to find ways to equally divide shapes. They will communicate their reasoning and problem solving strategies. Students will review the names of the plane shapes.

Approximate Duration of the Lesson: 31 to 60 Minutes

Materials and Equipment:
How Many Ways Can You Cut a Pie? by Jane Belk Moncure
Five brown paper circles, divided (see "Preparation")
5" paper squares (3-4 per child)
4" x 6" paper rectangles (4-5 per child)
Eating Fractions by Bruce McMillan
Small (5") paper plates (1-2 per child)
The Little Mouse, the Red Ripe Strawberry and the Big Hungry Bear by Don Wood
Pattern blocks (included in AMSTI kit)
Observation Checklist (attached)

Technology Resources Needed:
Interactive whiteboard
SMART Notebook Interactive Viewer software
Computer with LCD projector
Web access
Student computer(s) with web access

Background/Preparation:
Make five brown paper circles (about 6"). Draw thick lines to divide one of them into even halves and four into 2 uneven pieces.
Leave one blank.

If you do not already have the SMART Notebook software installed, take 5 minutes to download and install the free SMART Interactive Viewer. Click on the top "download" button, fill in the required information, then click Run. Follow the directions in the pop-up window.

Procedures/Activities:

Engage:
1. Engage attention with dialog such as: "Have you ever eaten pie? Does one person ever eat a whole pie by herself? Let's see what happens to the pie in this book." Read How Many Ways Can You Cut a Pie? by Jane Belk Moncure.
2. Ask: "Have you ever had to share a pizza or a cookie with someone? Did they ever get a bigger piece than you did?" Display the blank brown paper circle. "Let's see if we can cut this 'pie' into two pieces so that Frog and Mouse can each have the same amount." Cut off a small sector, so that the parts are obviously unequal. Ask children if the two animals would get the same amount of pie. How can they tell? Invite volunteers to show the class how they know the two parts are not the same.
3. Cut each of the other unevenly divided circles, asking children if the pieces are the same. Finally, ask a child to show where you should cut. Cut the circle into equal halves, and fold to show they are the same. Introduce the word "equal".
4. Display the Cutting Cookies SMART Board activity (attached). Have a student pull a cookie apart by touching one "half" and sliding it away. In unison, the class says "equal" or "not equal". Flip one piece to check (double tap the piece, tap the arrow that appears, choose flip from the pull-down menu). Repeat for the other cookies.

Explore:
1. Explain that, now that children are "experts", it is their job to find ways to divide some other shapes into two equal parts. Display a square and a rectangle and review shape names.
2. Divide children into pairs. Give each pair 6-8 squares. Together they must find a way to cut a square into two equal parts. Both partners must agree that the parts are equal.
3. As they work, ask questions such as "How can you tell if your parts are equal? What strategies did you try? Can you find another way to make two equal parts?"
4. Give each pair a supply of rectangles (6-8). Challenge them to cut these into two equal parts. As they work, ask questions as above. Have them compare the dividing of squares to the dividing of the rectangles. Ask questions such as "How was it alike? How was it different?"

Explain:
1. Gather the children once again in a central location. Have each pair bring with them the shapes they successfully divided into equal halves.
2. Begin with the squares. Have one pair display the halves they brought to the rug. Use questions such as the following to encourage children to communicate their reasoning and problem solving skills: How do you know they are equal? Show us how you and your partner figured it out. Does everyone agree? (If the parts are not actually equal, prompt other children to explain why.) Did anyone figure it out a different way? Can you show us? Raise your hand if you cut your square the same way they did. Did anyone cut it a different way? Show us. How can you tell the parts are equal? How many ways can we cut a square into two equal parts? As students explain their thinking, encourage them to use math vocabulary such as parts, halves, divide and equal. Make sure students understand these concepts.
3. Repeat this dialogue for the rectangles.

Extend:
1. If time permits, read The Little Mouse, the Red Ripe Strawberry and the Big Hungry Bear by Don Wood, and discuss whether the strawberry is cut into equal parts. If it had been cut horizontally, would the parts be equal? Display this Symmetry Website on the interactive board, and click the first two shapes to see them cut into equal halves. Ask children to predict where each shape will be "cut" before clicking it.
2. Extend the learning to include 3-4 equal parts. Read Eating Fractions by Bruce McMillan.
Challenge children to cut squares and rectangles into 3 or 4 equal parts. Discuss as above.
3. Have children color small paper plates to resemble pizzas. Ask children who need a challenge to cut their "pizzas" into three equal parts, those who need extra practice to make two parts, and the rest to make four.
4. Use this web book I Want My Half to practice dividing objects into 2-5 equal parts.
5. For an extra challenge, remove the tan rhombi from the pattern blocks and put the remaining blocks in a center. Ask students to figure out which pattern blocks are equal parts of which other ones (for example, a red trapezoid block can be divided into three green triangle blocks).

Evaluate:
While circulating during the explore phase, observing students' work and listening to their explanations, use the Observation Checklist (attached) or anecdotal records to note students' level of mastery. Use the rubric below to assign a score to each student.
4 -- Student found more than one way to equally divide each shape.
3 -- Student found one way to divide each shape, and clearly understands the difference between equal and unequal parts.
2 -- Student divided one shape equally and the other unequally, OR divided the shapes almost equally.
1 -- Student cannot distinguish between evenly and unevenly divided shapes.

Attachments: **Some files will display in a new window. Others will prompt you to download.
CuttingCookies.notebook
ObservationChecklist.doc

Remediation:
Students who need extra practice can view this Fabulous Fractions web book. The first two-thirds of the book deal with equal halves. This Pizza Party learning game can be used to reinforce the idea of equal parts. Children count the number of people who need to share the pizza and watch as the pizza splits into that many equal pieces.
Each area below is a direct link to general teaching strategies/classroom accommodations for students with identified learning and/or behavior problems such as: reading or math performance below grade level; test or classroom assignments/quizzes at a failing level; failure to complete assignments independently; difficulty with short-term memory, abstract concepts, staying on task, or following directions; poor peer interaction or temper tantrums, and other learning or behavior problems.
Presentation of Material
Environment
Time Demands
Materials
Attention
Using Groups and Peers
Assisting the Reluctant Starter
Dealing with Inappropriate Behavior
Be sure to check the student's IEP for specific accommodations.
Variations Submitted by ALEX Users:
Alabama Virtual Library
Hosted by Alabama Supercomputer Authority
The University of Alabama at Birmingham
The Malone Family Foundation
Thinkfinity
Best of the Web
Web Design by: Digital Mason LLC
How to use devices method in Airtest

Best Python code snippet using Airtest

all_reduce.py (Source: all_reduce.py, GitHub)

# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utilities to construct a TF subgraph implementing distributed All-Reduce."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math

from tensorflow.contrib import nccl
from tensorflow.python.framework import device as device_lib
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops


def _flatten_tensors(tensors):
  """Check tensors for isomorphism and flatten.

  Args:
    tensors: list of T @{tf.Tensor} which must all have the same shape.

  Returns:
    tensors: a list of T @{tf.Tensor} which are flattened (1D) views of tensors
    shape: the original shape of each element of input tensors

  Raises:
    ValueError: tensors are empty or non-isomorphic or have unknown shape.
  """
  if not tensors:
    raise ValueError("tensors cannot be empty")
  shape = tensors[0].shape
  for tensor in tensors:
    shape = shape.merge_with(tensor.shape)
  if not shape.is_fully_defined():
    raise ValueError("Tensors must have statically known shape.")
  if len(shape) != 1:
    reshaped = []
    for t in tensors:
      with ops.colocate_with(t):
        reshaped.append(array_ops.reshape(t, [-1]))
    tensors = reshaped
  return tensors, shape


def _reshape_tensors(tensors, shape):
  """Reshape tensors flattened by _flatten_tensors.

  Args:
    tensors: list of T @{tf.Tensor} of identical length 1D tensors.
    shape: list of integers describing the desired shape. Product of
      the elements must equal the length of each tensor.

  Returns:
    list of T @{tf.Tensor} which are the reshaped inputs.
  """
  reshaped = []
  for t in tensors:
    with ops.colocate_with(t):
      reshaped.append(array_ops.reshape(t, shape))
  return reshaped


def _padded_split(tensor, pieces):
  """Like split for 1D tensors but pads-out case where len % pieces != 0.

  Args:
    tensor: T @{tf.Tensor} that must be 1D.
    pieces: a positive integer specifying the number of pieces into which
      tensor should be split.

  Returns:
    list of T @{tf.Tensor} of length pieces, which hold the values of
      thin input tensor, in order. The final tensor may
      be zero-padded on the end to make its size equal to those of all
      of the other tensors.

  Raises:
    ValueError: The input tensor is not 1D.
  """
  shape = tensor.shape
  if 1 != len(shape):
    raise ValueError("input tensor must be 1D")
  tensor_len = shape[0].value
  with ops.colocate_with(tensor):
    if tensor_len % pieces != 0:
      # pad to an even length
      chunk_size = 1 + tensor_len // pieces
      if pieces > tensor_len:
        # This is an edge case that should not come up in practice,
        # i.e.
a different reduction algorithm would be better,89 # but we'll make it work just for completeness.90 pad_len = pieces - tensor_len91 extended_whole = array_ops.concat(92 [tensor, array_ops.zeros([pad_len], dtype=tensor.dtype)], 0)93 parts = array_ops.split(extended_whole, pieces)94 return parts, pad_len95 elif (pieces - 1) * chunk_size >= tensor_len:96 # Another edge case of limited real interest.97 pad_len = (pieces * chunk_size) % tensor_len98 extended_whole = array_ops.concat(99 [tensor, array_ops.zeros([pad_len], dtype=tensor.dtype)], 0)100 parts = array_ops.split(extended_whole, pieces)101 return parts, pad_len102 else:103 last_chunk_size = tensor_len - (pieces - 1) * chunk_size104 pad_len = chunk_size - last_chunk_size105 piece_lens = [chunk_size for _ in range(pieces - 1)] + [last_chunk_size]106 parts = array_ops.split(tensor, piece_lens)107 parts[-1] = array_ops.concat(108 [parts[-1], array_ops.zeros([pad_len], dtype=tensor.dtype)], 0)109 return parts, pad_len110 else:111 return array_ops.split(tensor, pieces), 0112def _strip_padding(tensors, pad_len):113 """Strip the suffix padding added by _padded_split.114 Args:115 tensors: list of T @{tf.Tensor} of identical length 1D tensors.116 pad_len: number of elements to be stripped from the end of each tensor.117 Returns:118 list of T @{tf.Tensor} which are the stripped inputs.119 Raises:120 ValueError: tensors must be a non-empty list of 1D tensors, and121 each must be longer than pad_len.122 """123 if not tensors:124 raise ValueError("tensors cannot be empty")125 shape = tensors[0].shape126 if len(shape) > 1:127 raise ValueError("tensors must be 1D")128 prefix_len = int(shape[0] - pad_len)129 if prefix_len < 0:130 raise ValueError("pad_len longer than tensor")131 stripped = []132 for t in tensors:133 with ops.colocate_with(t):134 stripped.append(array_ops.slice(t, [0], [prefix_len]))135 return stripped136def _ragged_split(tensor, pieces):137 """Like split for 1D tensors but allows case where len % pieces != 
0.138 Args:139 tensor: T @{tf.Tensor} that must be 1D.140 pieces: a positive integer specifying the number of pieces into which141 tensor should be split.142 Returns:143 list of T @{tf.Tensor} of length pieces, which hold the values of144 the input tensor, in order. The final tensor may be shorter145 than the others, which will all be of equal length.146 Raises:147 ValueError: input tensor must be 1D.148 """149 shape = tensor.shape150 if 1 != len(shape):151 raise ValueError("input tensor must be 1D")152 tensor_len = shape[0].value153 chunk_size = tensor_len // pieces154 with ops.colocate_with(tensor):155 if tensor_len != (pieces * chunk_size):156 # last piece will be short157 assert pieces > 1158 last_chunk_size = tensor_len - ((pieces - 1) * chunk_size)159 assert last_chunk_size > 0160 piece_lens = [chunk_size for _ in range(pieces - 1)] + [last_chunk_size]161 return array_ops.split(tensor, piece_lens)162 else:163 return array_ops.split(tensor, pieces)164def _ring_permutations(num_workers, num_subchunks, gpu_perm):165 """"Generate an array of device index arrays, one for each subchunk.166 In the basic ring reduction algorithm there are size(T)/num_devices167 data chunks and each device process one chunk per tick, i.e. sending168 one chunk and receiving one chunk. The idea of subchunking is that169 each device processes num_subchunks smaller data regions per tick,170 and the ring rank permutation is different for each subchunk index171 so that a device is potentially sending to and receiving from172 num_subchunks different other devices at each tick. Where multiple173 independent data channels exist between devices, this strategy174 supplies a method of using them in parallel.175 Args:176 num_workers: number of worker tasks177 num_subchunks: number of subchunks into which to divide each per-GPU chunk.178 gpu_perm: an array of integers in [0, num_gpus-1] giving the default179 ring order of GPUs at each worker. 
Other permutations will be generated180 by rotating this array and splicing together per-worker instances.181 Raises:182 ValueError: the number of subchunks may not exceed the number of GPUs.183 Returns:184 pred_by_s_d: list of lists that maps (by index) from (subchunk, dev) to185 preceding device in the permutation for that subchunk. The186 device index of GPU i at worker j is i + (j * num_gpus).187 rank_by_s_d: list of lists that maps (by index) from (subchunk, dev) to188 local rank of device d in the permutation for that subchunk.189 """190 num_gpus = len(gpu_perm)191 devices = num_workers * num_gpus192 if devices == 0:193 return [], []194 if num_subchunks > num_gpus:195 raise ValueError(196 "num_subchunks %d must be <= num_gpus %d" % (num_subchunks, num_gpus))197 rotation_interval = max(1, int(num_gpus / num_subchunks))198 perms_by_s = []199 for s in range(0, num_subchunks):200 full_order = []201 offset = s * rotation_interval202 for w in range(0, num_workers):203 default_order = [(w * num_gpus) + i for i in gpu_perm]204 dev_order = default_order[offset:] + default_order[:offset]205 full_order += dev_order206 perms_by_s.append(full_order)207 pred_by_s_d = [[-1 for d in range(0, devices)]208 for s in range(0, num_subchunks)]209 rank_by_s_d = [[-1 for d in range(0, devices)]210 for s in range(0, num_subchunks)]211 for s in range(0, num_subchunks):212 for d in range(0, devices):213 for t in range(0, devices):214 if d == perms_by_s[s][t]:215 rank_by_s_d[s][d] = t216 pred_by_s_d[s][d] = perms_by_s[s][(t + devices - 1) % devices]217 break218 return (pred_by_s_d, rank_by_s_d)219def build_ring_all_reduce(input_tensors, num_workers, num_subchunks,220 gpu_perm, red_op, un_op=None):221 """Construct a subgraph performing a ring-style all-reduce of input_tensors.222 Args:223 input_tensors: a list of T @{tf.Tensor} objects, which must all224 have the same shape and type.225 num_workers: number of worker tasks spanned by input_tensors.226 num_subchunks: number of subchunks 
each device should process in one tick.227 gpu_perm: a list of ints giving a ring-wise rank ordering of GPUs at228 each worker. All workers must have the same number of229 GPUs with the same rank ordering. If NVLINK is available, this should230 be a ring order supported by NVLINK edges.231 red_op: a binary operator for elementwise reduction.232 un_op: an optional unary operator to apply to fully reduced values.233 Raises:234 ValueError: empty input_tensors or they don't all have same235 size.236 Returns:237 a list of T @{tf.Tensor} identical sum-reductions of input_tensors.238 """239 if len(input_tensors) < 2:240 raise ValueError("input_tensors must be length 2 or longer")241 input_tensors, shape = _flatten_tensors(input_tensors)242 devices = [t.device for t in input_tensors]243 (pred_by_s_d, rank_by_s_d) = _ring_permutations(244 num_workers, num_subchunks, gpu_perm)245 chunks_by_dev, pad_len = _build_ring_gather(246 input_tensors, devices,247 num_subchunks, pred_by_s_d, rank_by_s_d, red_op)248 if un_op:249 chunks_by_dev = _apply_unary_to_chunks(un_op, chunks_by_dev)250 output_tensors = _build_ring_scatter(pred_by_s_d, rank_by_s_d,251 chunks_by_dev)252 if pad_len > 0:253 output_tensors = _strip_padding(output_tensors, pad_len)254 if len(shape) != 1:255 output_tensors = _reshape_tensors(output_tensors, shape)256 return output_tensors257def _build_ring_gather(input_tensors, devices, num_subchunks,258 pred_by_s_d, rank_by_s_d, red_op):259 """Construct a subgraph for the first (reduction) pass of ring all-reduce.260 Args:261 input_tensors: a list of T @{tf.Tensor} 1D input tensors of same262 shape and type.263 devices: array of device name strings264 num_subchunks: number of subchunks each device should process in one tick.265 pred_by_s_d: as produced by _ring_permutations266 rank_by_s_d: as produced by _ring_permutations267 red_op: a binary operator for elementwise reduction268 Raises:269 ValueError: tensors must all be one dimensional.270 Returns:271 list of list of 
T @{tf.Tensor} of (partially) reduced values where272 exactly num_subchunks chunks at each device are fully reduced.273 """274 num_devices = len(input_tensors)275 if num_devices == 0:276 return []277 if num_devices == 1:278 return input_tensors279 shape = input_tensors[0].shape280 if 1 != len(shape):281 raise ValueError("input tensors must be 1D")282 num_chunks = num_devices * num_subchunks283 num_ticks = num_devices - 1284 # Initialize chunks_by_dev with splits of the input tensors.285 chunks_by_dev = []286 split_pad_len = 0287 for d in range(0, num_devices):288 with ops.device(devices[d]):289 splits, split_pad_len = _padded_split(input_tensors[d], num_chunks)290 chunks_by_dev.append(splits)291 # Reduction phase292 for tick in range(0, num_ticks):293 # One new partial reduction for every chunk294 new_partial_reductions = [None for _ in range(0, num_chunks)]295 # Compute reductions with respect to last tick's values296 for d in range(0, num_devices):297 with ops.device(devices[d]):298 for s in range(0, num_subchunks):299 rank = rank_by_s_d[s][d]300 seg_index = (rank + num_devices - (2 + tick)) % num_devices301 pred_dev = pred_by_s_d[s][d]302 chunk_index = (seg_index * num_subchunks) + s303 new_partial_reductions[chunk_index] = red_op(304 chunks_by_dev[pred_dev][chunk_index],305 chunks_by_dev[d][chunk_index])306 # Update chunks_by_dev with the new values at the end of the tick.307 for d in range(0, num_devices):308 for s in range(0, num_subchunks):309 rank = rank_by_s_d[s][d]310 seg_index = (rank + num_devices - (2 + tick)) % num_devices311 chunk_index = (seg_index * num_subchunks) + s312 chunks_by_dev[d][chunk_index] = new_partial_reductions[chunk_index]313 return chunks_by_dev, split_pad_len314def _apply_unary_to_chunks(f, chunks_by_dev):315 """Apply a unary op to each tensor in chunks_by_dev, on same device.316 Args:317 f: a unary function over T @{tf.Tensor}.318 chunks_by_dev: list of lists of T @{tf.Tensor}.319 Returns:320 new list of lists of T @{tf.Tensor} 
with the same structure as321 chunks_by_dev containing the derived tensors.322 """323 output = []324 for x in chunks_by_dev:325 with ops.colocate_with(x[0]):326 output.append([f(t) for t in x])327 return output328def _build_ring_scatter(pred_by_s_d, rank_by_s_d,329 chunks_by_dev):330 """Construct subgraph for second (scatter) pass of ring all-reduce.331 Args:332 pred_by_s_d: as produced by _ring_permutations333 rank_by_s_d: as produced by _ring_permutations334 chunks_by_dev: list of list of T @{tf.Tensor} indexed by ints335 (device, chunk)336 Raises:337 ValueError: chunks_by_dev is not well-formed338 Returns:339 list of T @{tf.Tensor} which are the fully reduced tensors, one340 at each device corresponding to the outer dimension of chunks_by_dev.341 """342 num_devices = len(chunks_by_dev)343 num_chunks = len(chunks_by_dev[0])344 if 0 != num_chunks % num_devices:345 raise ValueError(346 "Expect number of chunks per device to be divisible by num_devices")347 num_subchunks = int(num_chunks / num_devices)348 num_ticks = num_devices - 1349 for tick in range(0, num_ticks):350 passed_values = [None for _ in range(0, num_chunks)]351 for d in range(0, num_devices):352 with ops.colocate_with(chunks_by_dev[d][0]):353 for s in range(0, num_subchunks):354 rank = rank_by_s_d[s][d]355 seg_index = (rank + num_devices - (1 + tick)) % num_devices356 pred_dev = pred_by_s_d[s][d]357 chunk_index = (seg_index * num_subchunks) + s358 passed_values[chunk_index] = array_ops.identity(359 chunks_by_dev[pred_dev][chunk_index])360 for d in range(0, num_devices):361 for s in range(0, num_subchunks):362 rank = rank_by_s_d[s][d]363 seg_index = (rank + num_devices - (1 + tick)) % num_devices364 chunk_index = (seg_index * num_subchunks) + s365 chunks_by_dev[d][chunk_index] = passed_values[chunk_index]366 # Join chunks at each device.367 output = []368 for x in chunks_by_dev:369 with ops.colocate_with(x[0]):370 output.append(array_ops.concat(x, 0))371 return output372def 
build_recursive_hd_all_reduce(input_tensors, red_op, un_op=None):373 """Construct a subgraph for recursive halving-doubling all-reduce.374 The recursive halving-doubling algorithm is described in375 http://www.mcs.anl.gov/~thakur/papers/ijhpca-coll.pdf376 The concept is to arrange the participating n devices in377 a linear sequence where devices exchange data pairwise378 with one other device in each round. During the gather379 phase there are lg(n) rounds where devices exchange380 increasingly smaller sub-tensors with another device381 at increasingly greater distances, until at the top382 each device has 1/n of the fully reduced values. During the383 scatter phase each device exchanges its fully reduced384 sub-tensor (which doubles in length at each round)385 with one other device at increasingly smaller distances386 until each device has all of the fully reduced values.387 Note: this preliminary version requires that len(input_tensors) be a388 power of 2. TODO(tucker): relax this restriction. Also, the389 number of elements in each tensor must be divisible by 2^h where h390 is the number of hops in each phase. 
This will also be relaxed in391 the future with edge-case specific logic.392 Args:393 input_tensors: list of T @{tf.Tensor} to be elementwise reduced.394 red_op: a binary elementwise reduction Op.395 un_op: an optional unary elementwise Op to apply to reduced values.396 Returns:397 list of T @{tf.Tensor} which are the fully reduced tensors, one398 at each device of input_tensors.399 Raises:400 ValueError: num_devices not a power of 2, or tensor len not divisible401 by 2 the proper number of times.402 """403 devices = [t.device for t in input_tensors]404 input_tensors, shape = _flatten_tensors(input_tensors)405 reduced_shards = _build_recursive_hd_gather(input_tensors, devices, red_op)406 if un_op:407 reduced_shards = [un_op(t) for t in reduced_shards]408 output_tensors = _build_recursive_hd_scatter(reduced_shards, devices)409 if len(shape) != 1:410 output_tensors = _reshape_tensors(output_tensors, shape)411 return output_tensors412def _build_recursive_hd_gather(input_tensors, devices, red_op):413 """Construct the gather phase of recursive halving-doubling all-reduce.414 Args:415 input_tensors: list of T @{tf.Tensor} to be elementwise reduced.416 devices: a list of strings naming the devices hosting input_tensors,417 which will also be used to host the (partial) reduction values.418 red_op: a binary elementwise reduction Op.419 Returns:420 list of T @{tf.Tensor} which are the fully reduced tensor shards.421 Raises:422 ValueError: num_devices not a power of 2, or tensor len not divisible423 by 2 the proper number of times.424 """425 num_devices = len(devices)426 num_hops = int(math.log(num_devices, 2))427 if num_devices != (2 ** num_hops):428 raise ValueError("num_devices must be a power of 2")429 chunks = input_tensors430 for h in range(0, num_hops):431 span = 2 ** h432 group_size = span * 2433 new_chunks = [[] for _ in devices]434 for d in range(0, num_devices):435 if (d % group_size) >= (group_size / 2):436 # skip right half of a pair437 continue438 left_dev = 
devices[d]439 right_dev = devices[d + span]440 left_split = array_ops.split(chunks[d], 2)441 right_split = array_ops.split(chunks[d+span], 2)442 with ops.device(left_dev):443 new_chunks[d] = red_op(left_split[0], right_split[0])444 with ops.device(right_dev):445 new_chunks[d + span] = red_op(left_split[1], right_split[1])446 chunks = new_chunks447 return chunks448def _build_recursive_hd_scatter(input_tensors, devices):449 """Construct the scatter phase of recursive halving-doublng all-reduce.450 Args:451 input_tensors: list of T @{tf.Tensor} that are fully-reduced shards.452 devices: a list of strings naming the devices on which the reconstituted453 full tensors should be placed.454 Returns:455 list of T @{tf.Tensor} which are the fully reduced tensors.456 """457 num_devices = len(devices)458 num_hops = int(math.log(num_devices, 2))459 assert num_devices == (2 ** num_hops), "num_devices must be a power of 2"460 chunks = input_tensors461 for h in reversed(range(0, num_hops)):462 span = 2 ** h463 group_size = span * 2464 new_chunks = [[] for _ in devices]465 for d in range(0, num_devices):466 if (d % group_size) >= (group_size / 2):467 # skip right half of a pair468 continue469 left_idx = d470 right_idx = d + span471 left_dev = devices[left_idx]472 right_dev = devices[right_idx]473 with ops.device(left_dev):474 new_chunks[left_idx] = array_ops.concat([chunks[left_idx],475 chunks[right_idx]], 0)476 with ops.device(right_dev):477 new_chunks[right_idx] = array_ops.concat([chunks[left_idx],478 chunks[right_idx]], 0)479 chunks = new_chunks480 return chunks481def build_shuffle_all_reduce(input_tensors, gather_devices, red_op, un_op=None):482 """Construct a subgraph for shuffle all-reduce.483 Shuffle reduce is essentially the algorithm implemented when using484 parameter servers. Suppose tensor length is n, there are d devices485 and g gather shards. Each device sends a n/g length sub-tensor to486 each gather shard. 
The gather shards perform a reduction across d487 fragments, then broadcast the result back to each device. The488 devices then join the g fully reduced fragments they receive from489 the shards. The gather shards could perform d-1 pairwise490 reductions, or one d-way reduction. The first is better where491 reduction Op time is low compared to transmission time, the second492 better in the other case.493 Args:494 input_tensors: list of T @(tf.Tensor} values to be reduced.495 gather_devices: list of names of devices on which reduction shards496 should be placed.497 red_op: an n-array elementwise reduction Op498 un_op: optional elementwise unary Op to be applied to fully-reduced values.499 Returns:500 list of T @{tf.Tensor} which are the fully reduced tensors.501 """502 input_tensors, shape = _flatten_tensors(input_tensors)503 dst_devices = [t.device for t in input_tensors]504 reduced_shards = _build_shuffle_gather(input_tensors, gather_devices,505 red_op, un_op)506 output_tensors = _build_shuffle_scatter(reduced_shards, dst_devices)507 if len(shape) != 1:508 output_tensors = _reshape_tensors(output_tensors, shape)509 return output_tensors510def _build_shuffle_gather(input_tensors, gather_devices, red_op, un_op=None):511 """Construct the gather (concentrate and reduce) phase of shuffle all-reduce.512 Args:513 input_tensors: list of T @(tf.Tensor} values to be reduced.514 gather_devices: list of names of devices on which reduction shards515 should be placed.516 red_op: the binary reduction Op517 un_op: optional elementwise unary Op to be applied to fully-reduced values.518 Returns:519 list of T @{tf.Tensor} which are the fully reduced shards.520 Raises:521 ValueError: inputs not well-formed.522 """523 num_source_devices = len(input_tensors)524 num_gather_devices = len(gather_devices)525 shape = input_tensors[0].shape526 if len(shape) != 1:527 raise ValueError("input_tensors must be 1D")528 shards_by_source = []529 for d in range(0, num_source_devices):530 with 
ops.colocate_with(input_tensors[d]):531 shards_by_source.append(532 _ragged_split(input_tensors[d], num_gather_devices))533 reduced_shards = []534 for d in range(0, num_gather_devices):535 with ops.device(gather_devices[d]):536 values = [s[d] for s in shards_by_source]537 red_shard = red_op(values)538 if un_op:539 red_shard = un_op(red_shard)540 reduced_shards.append(red_shard)541 return reduced_shards542def _build_shuffle_scatter(reduced_shards, dst_devices):543 """Build the scatter phase of shuffle all-reduce.544 Args:545 reduced_shards: list of T @(tf.Tensor} fully reduced shards546 dst_devices: list of names of devices at which the fully-reduced value547 should be reconstituted.548 Returns:549 list of T @{tf.Tensor} scattered tensors.550 """551 num_devices = len(dst_devices)552 out_tensors = []553 for d in range(0, num_devices):554 with ops.device(dst_devices[d]):555 out_tensors.append(array_ops.concat(reduced_shards, 0))556 return out_tensors557def _split_by_task(devices, values):558 """Partition devices and values by common task.559 Args:560 devices: list of device name strings561 values: list of T @{tf.tensor} of same length as devices.562 Returns:563 (per_task_devices, per_task_values) where both values are564 lists of lists with isomorphic structure: the outer list is565 indexed by task, and the inner list has length of the number566 of values belonging to that task. 
per_task_devices contains567 the specific devices to which the values are local, and568 per_task_values contains the corresponding values.569 Raises:570 ValueError: devices must be same length as values.571 """572 num_devices = len(devices)573 if num_devices != len(values):574 raise ValueError("len(devices) must equal len(values)")575 per_task_devices = collections.OrderedDict()576 per_task_values = collections.OrderedDict()577 for d in range(num_devices):578 d_spec = device_lib.DeviceSpec.from_string(devices[d])579 if not hasattr(d_spec, "task") or d_spec.task is None:580 assert False, "failed to parse device %s" % devices[d]581 index = (d_spec.job or "localhost", d_spec.replica or 0, d_spec.task)582 if index not in per_task_devices:583 per_task_devices[index] = []584 per_task_values[index] = []585 per_task_devices[index].append(devices[d])586 per_task_values[index].append(values[d])587 return (list(per_task_devices.values()), list(per_task_values.values()))588def build_nccl_all_reduce(input_tensors, red_op, un_op=None):589 """Build a subgraph that does one full all-reduce, using NCCL.590 Args:591 input_tensors: list of T @{tf.Tensor} of same-shape and type values to592 be reduced.593 red_op: binary elementwise reduction operator. 
Must be one of
      {tf.add}
    un_op: optional unary elementwise Op to apply to fully-reduce values.

  Returns:
    list of T @{tf.Tensor} of reduced values.

  Raises:
    ValueError: red_op not supported.
  """
  if red_op == math_ops.add:
    output_tensors = nccl.all_sum(input_tensors)
  else:
    raise ValueError("red_op not supported by NCCL all-reduce: ", red_op)
  if un_op:
    un_op_wrapped = []
    for t in output_tensors:
      with ops.colocate_with(t):
        un_op_wrapped.append(un_op(t))
    output_tensors = un_op_wrapped
  return output_tensors


def _build_nccl_hybrid(input_tensors, red_op, upper_level_f):
  """Construct a subgraph for NCCL hybrid all-reduce.

  Args:
    input_tensors: list of T @{tf.Tensor} of same-shape and type values to
      be reduced.
    red_op: binary elementwise reduction operator.
    upper_level_f: function for reducing one value per worker, across
      workers.

  Returns:
    list of T @{tf.Tensor} of reduced values.

  Raises:
    ValueError: inputs not well-formed.
  """
  input_tensors, shape = _flatten_tensors(input_tensors)
  devices = [t.device for t in input_tensors]
  per_worker_devices, per_worker_values = _split_by_task(devices, input_tensors)
  num_workers = len(per_worker_devices)
  up_values = [None for w in range(0, num_workers)]
  up_devices = up_values[:]
  down_values = up_values[:]
  # First stage: reduce within each worker using NCCL
  for w in range(0, num_workers):
    worker_values = build_nccl_all_reduce(per_worker_values[w], red_op)
    # NOTE: these reductions will not run to completion unless
    # every output value is used.  Since we only need one, we
    # need to put control dependencies on the rest.
    with ops.control_dependencies(worker_values):
      with ops.device(worker_values[0].device):
        up_values[w] = array_ops.identity(worker_values[0])
      up_devices[w] = per_worker_devices[w][0]
  # Second stage: Apply upper_level_f to reduce across first device at
  # each worker
  level_2_output = upper_level_f(up_values)
  # Third stage: propagate within each worker using NCCL Broadcast
  for w in range(0, num_workers):
    dst_tensors = []
    with ops.device(per_worker_devices[w][0]):
      broadcast_src = nccl.broadcast(array_ops.identity(level_2_output[w]))
    for d in per_worker_devices[w]:
      with ops.device(d):
        dst_tensors.append(array_ops.identity(broadcast_src))
    down_values[w] = dst_tensors
  output_tensors = [v for sublist in down_values for v in sublist]
  if len(shape) != 1:
    output_tensors = _reshape_tensors(output_tensors, shape)
  return output_tensors


def _reduce_non_singleton(input_tensors, red_f, un_op):
  """If input_tensors has more than one element apply red_f, else apply un_op."""
  if len(input_tensors) > 1:
    return red_f(input_tensors)
  else:
    if not un_op:
      return input_tensors
    output_tensors = []
    for t in input_tensors:
      with ops.colocate_with(t):
        output_tensors.append(un_op(t))
    return output_tensors


def build_nccl_then_ring(input_tensors, subdiv, red_op, un_op=None):
  """Construct hybrid of NCCL within workers, Ring across workers."""
  def upper_builder(y):
    return build_ring_all_reduce(y, len(y), subdiv, [0], red_op, un_op)
  def upper_level_f(x):
    return _reduce_non_singleton(x, upper_builder, un_op)
  return _build_nccl_hybrid(input_tensors, red_op, upper_level_f)


def build_nccl_then_recursive_hd(input_tensors, red_op, un_op=None):
  """Construct hybrid of NCCL within workers, Recursive-HD across workers."""
  upper_level_f = lambda x: build_recursive_hd_all_reduce(x, red_op, un_op)
  return _build_nccl_hybrid(input_tensors, red_op, upper_level_f)


def build_nccl_then_shuffle(input_tensors, gather_devices, nccl_red_op,
                            shuffle_red_op, un_op=None):
  """Construct hybrid of NCCL within workers, Shuffle across workers."""
  upper_level_f = lambda x: build_shuffle_all_reduce(x, gather_devices,
                                                     shuffle_red_op, un_op)
  return _build_nccl_hybrid(input_tensors, nccl_red_op, upper_level_f)


def _build_shuffle_hybrid(input_tensors, gather_devices, red_op, upper_level_f):
  """Construct a subgraph for Shuffle hybrid all-reduce.

  Args:
    input_tensors: list of T @{tf.Tensor} of same-shape and type values to
      be reduced.
    gather_devices: list of device names on which to host gather shards.
    red_op: binary elementwise reduction operator.
    upper_level_f: function for reducing one value per worker, across
      workers.

  Returns:
    list of T @{tf.Tensor} of reduced values.

  Raises:
    ValueError: inputs not well-formed.
  """
  input_tensors, shape = _flatten_tensors(input_tensors)
  # First stage, reduce across each worker using gather_devices.
  devices = [t.device for t in input_tensors]
  per_worker_devices, per_worker_values = _split_by_task(devices, input_tensors)
  num_workers = len(per_worker_devices)
  up_values = []
  if len(gather_devices) != num_workers:
    raise ValueError("For shuffle hybrid, gather_devices must contain one "
                     "device per worker.")
  for w in range(0, num_workers):
    reduced_shards = _build_shuffle_gather(
        per_worker_values[w], [gather_devices[w]], red_op)
    up_values.append(reduced_shards[0])
  # Second stage, apply upper_level_f.
  level_2_output = upper_level_f(up_values)
  # Third stage, apply shuffle scatter at each worker.
  output_tensors = []
  for w in range(0, num_workers):
    output_tensors += _build_shuffle_scatter(
        [level_2_output[w]], per_worker_devices[w])
  if len(shape) != 1:
    output_tensors = _reshape_tensors(output_tensors, shape)
  return output_tensors


def build_shuffle_then_ring(input_tensors, gather_devices, subdiv,
                            red_n_op, red_op, un_op=None):
  """Construct hybrid of Shuffle within workers, Ring across workers."""
  def upper_builder(tensors):
    return build_ring_all_reduce(tensors, len(tensors), subdiv, [0],
                                 red_op, un_op)
  def upper_level_f(tensors):
    return _reduce_non_singleton(tensors, upper_builder, un_op)
  return _build_shuffle_hybrid(
      input_tensors, gather_devices, red_n_op, upper_level_f)


def build_shuffle_then_shuffle(input_tensors, first_gather_devices,
                               second_gather_devices, red_op, un_op=None):
  """Construct hybrid of Shuffle within workers, Shuffle across workers."""
  def upper_builder(tensors):
    return build_shuffle_all_reduce(tensors, second_gather_devices,
                                    red_op, un_op)
  def upper_level_f(tensors):
    return _reduce_non_singleton(tensors, upper_builder, un_op)
  return _build_shuffle_hybrid(...
app.py Source: app.py

import paho.mqtt.client as mqtt
from flask import Flask, g, render_template, request, Response
from flask_cors import CORS
from flask_socketio import SocketIO, emit
from flask_apscheduler import APScheduler
from libDB import logDB, clearOldDB, queryDB
import numpy as np
import json
from libPickle import *
from pyimagesearch.motion_detection import SingleMotionDetector
from imutils.video import VideoStream
import threading
import argparse
import datetime
import imutils
import schedule
import time
import cv2

# edit by chrome
"""
Data storage
- hot : pickle devices data
- cold :
- frozen : sqlite3
"""

class Config(object):
    SCHEDULER_API_ENABLED = True
    SECRET_KEY = 'secret!'
    threaded = True

scheduler = APScheduler()
# initialize the output frame and a lock used to ensure thread-safe
# exchanges of the output frames (useful when multiple browsers/tabs
# are viewing the stream)
outputFrame = None
lock = threading.Lock()
app = Flask(__name__)
app.config.from_object(Config())
CORS(app)
socketio = SocketIO(app)
# initialize the video stream and allow the camera sensor to warm up
# vs = VideoStream(usePiCamera=1).start()
vs = VideoStream(src=0).start()
# Create a dictionary called devices to store the device number, name, and device state:
devices = {
    'one': {'floor': 'fl1', 'position': 'garage', 'object': 'door', 'cmd': '01', 'state': 'False', 'type': 'on/off', 'topic': 'fl1/garage/door/01', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'two': {'floor': 'fl1', 'position': 'garage', 'object': 'door', 'cmd': '02', 'state': 'False', 'type': 'latch', 'topic': 'fl1/garage/door/02', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'three': {'floor': 'fl1', 'position': 'com', 'object': 'light', 'cmd': '01', 'state': 'False', 'type': 'on/off', 'topic': 'fl1/com/light/01', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'four': {'floor': 'fl1', 'position': 'com', 'object': 'light', 'cmd': '02', 'state': 'False', 'type': 'on/off', 'topic': 'fl1/com/light/02', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'five': {'floor': 'fl2', 'position': 'liv', 'object': 'fan', 'cmd': '01', 'state': 'False', 'type': 'on/off', 'topic': 'fl2/liv/fan/01', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'six': {'floor': 'fl2', 'position': 'cen', 'object': 'light', 'cmd': '01', 'state': 'False', 'type': 'on/off', 'topic': 'fl2/cen/light/01', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'},
    'seven': {'floor': 'fl3', 'position': 'bed', 'object': 'fan', 'cmd': '01', 'state': 'False', 'type': 'on/off', 'topic': 'fl3/bed/fan/01', 'temp': '0', 'humid': '0', 'th_enable': 'False', 'time_enable': 'False'}
}
try:
    devices = load_pickle_obj('devices')
    # print(devices)
    print("devices exist")
except:
    save_pickle_obj(devices, 'devices')

sensors = {
    'one': {'name': 'sensorName1'}
}
jobCron = {'startHour': '14',
           'startMinute': '00',
           'stopHour': '14',
           'stopMinute': '30'}
try:
    jobCron = load_pickle_obj('jobCron')
    # print(jobCron)
    print("jobCron exist")
except:
    save_pickle_obj(jobCron, 'jobCron')

queryDatas = {'temp': {},
              'timeStamp': {},
              'humid': {}}
# Put the device dictionary into the template data dictionary:
templateData = {
    'devices': devices,
    'sensors': sensors,
    'jobCron': jobCron,
    'queryDatas': queryDatas
}

@scheduler.task('cron', id='do_job_on', hour=jobCron['startHour'], minute=jobCron['startMinute'])
def jobOn():
    print('Job on executed')
    for device in devices:
        if devices[device]['type'] == 'on/off':
            if devices[device]['time_enable'] == 'True':
                devices[device]['state'] = 'True'
                mqtt_client.publish(devices[device]['topic'], "on")
                print(device + ' ' + devices[device]['time_enable'])
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")

@scheduler.task('cron', id='do_job_off', hour=jobCron['stopHour'], minute=jobCron['stopMinute'])
def jobOff():
    print('Job off executed')
    for device in devices:
        if devices[device]['type'] == 'on/off':
            if devices[device]['time_enable'] == 'True':
                devices[device]['state'] = 'False'
                mqtt_client.publish(devices[device]['topic'], "off")
                print(device + ' ' + devices[device]['time_enable'])
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")

@scheduler.task('cron', id='do_job_DB', minute='*')
def jobDB():
    print('Job DB executed')
    for device in devices:
        if devices[device]['th_enable'] == 'True':
            logDB(devices[device]['topic'], devices[device]['temp'], devices[device]['humid'], "DHT", "true")
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")

@scheduler.task('cron', id='do_job_ClrDB', day='*')
def jobClrDB():
    print('Job ClrDB executed')
    clearOldDB()

scheduler.init_app(app)
scheduler.start()

# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe("fl1/#")  # + single-level wild card, # multi-level wild card
    client.subscribe("fl2/#")
    client.subscribe("fl3/#")
    client.subscribe("#")

def whx(ownval, owndicts):
    for x in owndicts:
        if (ownval) in owndicts[x].values():
            return x
    return 'none'

# The callback for when a PUBLISH message is received from the ESP8266.
def on_message(client, userdata, message):
    # socketio.emit('my variable')
    print("Received message '" + str(message.payload.decode('utf8')) + "' on topic '" + message.topic + "' with QoS " + str(message.qos))
    if message.topic.startswith('fl'):
        topicMsgSplit = message.topic.split("/")
        slash_count = message.topic.count("/")
        if slash_count >= 3:
            topicMsg = topicMsgSplit[0] + "/" + topicMsgSplit[1] + "/" + topicMsgSplit[2] + "/" + topicMsgSplit[3]
            indexName = whx(topicMsg, devices)
            devices_list = list(devices)
            n_index = devices_list.index(indexName)
            if "/temperature" in message.topic:
                devices[indexName]['th_enable'] = 'True'
                devices[indexName]['temp'] = str(message.payload.decode('utf8'))
                socketio.emit('temp', {'data': devices[indexName]['temp'], 'pos': n_index})
                # print("temperature update")
            if "/humidity" in message.topic:
                devices[indexName]['th_enable'] = 'True'
                devices[indexName]['humid'] = str(message.payload.decode('utf8'))
                socketio.emit('humid', {'data': devices[indexName]['humid'], 'pos': n_index})
                # print("humidity update")
            if "/feedback" in message.topic:
                if "on" in str(message.payload.decode('utf8')):
                    if devices[indexName]['state'] == 'False':
                        devices[indexName]['state'] = 'True'
                        socketio.emit('refresh', {})
                        # print("refresh")
                else:
                    if devices[indexName]['state'] == 'True':
                        devices[indexName]['state'] = 'False'
                        socketio.emit('refresh', {})
                        # print("refresh")
                # print("feedback update")
        else:
            print("mqtt message.topic error")
    elif message.topic.startswith('tele'):
        # print("tasmota")
        topicMsgSplit = message.topic.split("/")  # [0]:tele, [1]:name, [2]:classify
        # topicMsg = topicMsgSplit[0] + "/" + topicMsgSplit[1] + "/" + topicMsgSplit[2]
        if topicMsgSplit[2] == "SENSOR":
            # print(topicMsgSplit[1])
            # json.loads (not dumps): the payload is a JSON string to be parsed
            topic_json = json.loads(str(message.payload.decode('utf8')))
            print(topic_json["ENERGY"])
    else:
        print("unknown topic: " + message.topic)

    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")

mqtt_client = mqtt.Client()
mqtt_client.on_connect = on_connect
mqtt_client.on_message = on_message
mqtt_client.connect("127.0.0.1", 1883, 60)
# mqtt_client.connect("192.168.2.46", 1883, 60)
mqtt_client.loop_start()

@app.route("/", methods=['GET', 'POST'])
def main():
    if request.method == 'GET':
        queryDatas['temp'] = queryDB("temp")
        queryDatas['humid'] = queryDB("humid")
        queryDatas['timeStamp'] = queryDB("timeStamp")
        # print(queryDatas['timeStamp'])
        templateData = {
            'devices': devices,
            'sensors': sensors,
            'jobCron': jobCron,
            'queryDatas': queryDatas
        }
        try:
            save_pickle_obj(devices, 'devices')
        except:
            print("error")
        # Pass the template data into the template main.html and return it to the user
        return render_template('main.html', async_mode=socketio.async_mode, **templateData)
    elif request.method == 'POST':
        return 'post method do nothing'
    else:
        return 'method error'

@app.route("/debug")
def debug():
    templateData = {
        'devices': devices,
        'sensors': sensors,
        'jobCron': jobCron,
        'queryDatas': queryDatas
    }
    # Pass the template data into the template debug.html and return it to the user
    return render_template('debug.html', async_mode=socketio.async_mode, **templateData)

# The function below is executed when someone requests a URL with the device number and action in it:
@app.route("/<device>/<floor>/<position>/<object>/<cmd>/<ctrl>", methods=['POST', 'GET'])
def action(device, floor, position, object, cmd, ctrl):
    # Get the device name for the device being changed:
    # deviceName = devices[device]['name']
    # If the action part of the URL is "on" execute the code indented below:
    if ctrl == "on":
        mqtt_client.publish(devices[device]['topic'], "on")
        # print('mp ' + devices[device]['topic'])
        devices[device]['state'] = 'True'
    # if ctrl == "0" and object == 'door':
    if ctrl == "off":
        mqtt_client.publish(devices[device]['topic'], "off")
        # print('mp ' + devices[device]['topic'])
        devices[device]['state'] = 'False'
    if ctrl == "toggle":
        if devices[device]['state'] == 'True':
            mqtt_client.publish(devices[device]['topic'], "off")
            devices[device]['state'] = 'False'
        else:
            mqtt_client.publish(devices[device]['topic'], "on")
            devices[device]['state'] = 'True'
    if ctrl == "click":
        mqtt_client.publish(devices[device]['topic'], "click")
        print('click ' + devices[device]['topic'])
        devices[device]['state'] = 'False'

    # Along with the device dictionary, put the message into the template data dictionary:
    templateData = {
        'devices': devices,
        'sensors': sensors,
        'jobCron': jobCron,
        'queryDatas': queryDatas
    }
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")
    return render_template('main.html', **templateData)

@app.route("/addSched/<device>")
def addSched(device):
    print('time_enable ' + devices[device]['time_enable'])
    devices[device]['time_enable'] = 'True'
    templateData = {
        'devices': devices,
        'sensors': sensors,
        'jobCron': jobCron,
        'queryDatas': queryDatas
    }
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")
    return render_template('main.html', **templateData)

@app.route("/rmSched/<device>")
def rmSched(device):
    print('time_enable ' + devices[device]['time_enable'])
    devices[device]['time_enable'] = 'False'
    templateData = {
        'devices': devices,
        'sensors': sensors,
        'jobCron': jobCron,
        'queryDatas': queryDatas
    }
    try:
        save_pickle_obj(devices, 'devices')
    except:
        print("error")
    return render_template('main.html', **templateData)

@app.route('/startTime', methods=['POST', 'GET'])
def startTime():
    if request.method == 'POST':
        print(request.form)
        result1 = str(request.form['startTime1'])
        result2 = str(request.form['startTime2'])
        for job in scheduler.get_jobs():
            print(job.id)
        try:
            scheduler.scheduler.reschedule_job('do_job_on', trigger='cron', hour=result1, minute=result2)
            jobCron['startHour'] = result1
            jobCron['startMinute'] = result2
        except:
            pass
        templateData = {
            'devices': devices,
            'sensors': sensors,
            'jobCron': jobCron,
            'queryDatas': queryDatas
        }
        try:
            save_pickle_obj(devices, 'devices')
            save_pickle_obj(jobCron, 'jobCron')
        except:
            print("error")
        return render_template('main.html', **templateData)

@app.route('/stopTime', methods=['POST', 'GET'])
def stopTime():
    if request.method == 'POST':
        result1 = str(request.form['stopTime1'])
        result2 = str(request.form['stopTime2'])
        for job in scheduler.get_jobs():
            print(job.id)
        try:
            scheduler.scheduler.reschedule_job('do_job_off', trigger='cron', hour=result1, minute=result2)
            jobCron['stopHour'] = result1
            jobCron['stopMinute'] = result2
        except:
            pass
        templateData = {
            'devices': devices,
            'sensors': sensors,
            'jobCron': jobCron,
            'queryDatas': queryDatas
        }
        try:
            save_pickle_obj(devices, 'devices')
            save_pickle_obj(jobCron, 'jobCron')
        except:
            print("error")
        return render_template('main.html', **templateData)

@app.route("/videoStream")
def videoStream():
    # return the rendered template
    return render_template("videoStream.html")

def detect_motion(frameCount):
    # grab global references to the video stream, output frame, and
    # lock variables
    global vs, outputFrame, lock
    # initialize the motion detector and the total number of frames
    # read thus far
    md = SingleMotionDetector(accumWeight=0.1)
    total = 0
    # loop over frames from the video stream
    while True:
        # read the next frame from the video stream, resize it,
        # convert the frame to grayscale, and blur it
        frame = vs.read()
        frame = imutils.resize(frame, width=400)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)
        # grab the current timestamp and draw it on the frame
        timestamp = datetime.datetime.now()
        cv2.putText(frame, timestamp.strftime("%A %d %B %Y %I:%M:%S%p"),
                    (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
        # if the total number of frames has reached a sufficient
        # number to construct a reasonable background model, then
        # continue to process the frame
        if total > frameCount:
            # detect motion in the image
            motion = md.detect(gray)
            # check to see if motion was found in the frame
            if motion is not None:
                # unpack the tuple and draw the box surrounding the
                # "motion area" on the output frame
                (thresh, (minX, minY, maxX, maxY)) = motion
                cv2.rectangle(frame, (minX, minY), (maxX, maxY), (0, 0, 255), 2)
        # update the background model and increment the total number
        # of frames read thus far
        md.update(gray)
        total += 1
        # acquire the lock, set the output frame, and release the lock
        with lock:
            outputFrame = frame.copy()
            # outputFrame = gray.copy()

def generate():
    # grab global references to the output frame and lock variables
    global outputFrame, lock
    # loop over frames from the output stream
    while True:
        # wait until the lock is acquired
        with lock:
            # check if the output frame is available, otherwise skip
            # the iteration of the loop
            if outputFrame is None:
                continue
            # encode the frame in JPEG format
            (flag, encodedImage) = cv2.imencode(".jpg", outputFrame)
            # ensure the frame was successfully encoded
            if not flag:
                continue
        # yield the output frame in the byte format
        yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
               bytearray(encodedImage) + b'\r\n')

@app.route("/video_feed")
def video_feed():
    # return the response generated along with the specific media
    # type (mime type)
    return Response(generate(), mimetype="multipart/x-mixed-replace; boundary=frame")

@socketio.on('my event')
def handle_my_custom_event(json):
    # print('received json data here: ' + str(json))
    pass

# function for responses
def results():
    # build a request object
    req = request.get_json(silent=True, force=True)
    print("Request:")
    print(json.dumps(req, indent=4))
    # fetch fulfillmentText from json
    fulfillmentText = req.get('queryResult').get('fulfillmentText')
    # print(fulfillmentText, flush=True)
    mqtt_client.publish("fl1/garage/door/02", "slide")
    # return a fulfillment response
    # return {'fulfillmentText': 'This is a response from webhook.'}
    return 'results'

@app.route('/webhook', methods=['GET', 'POST'])
def webhook():
    if request.method == 'POST':
        return results()
    else:
        return 'Hello'

if __name__ == "__main__":
    t = threading.Thread(target=detect_motion, args=(64,))
    t.daemon = True
    t.start()
    socketio.run(app, host='0.0.0.0', port=80, debug=True, use_reloader=False)

# release the video stream pointer
...

path.py Source: path.py

...
        List of devices ordered by coordinates
        """
        return sorted(self.devices, key=lambda dev: dev.md.z)

    @property
    def blocking_devices(self):
        """
        A list of devices that are currently inserted or are in unknown
        positions. This includes devices downstream of the first
        :attr:`.impediment`
        """
        # Cache important prior devices
        prior = None
        last_branches = list()
        block = list()
        for device in self.path:
            # If we have switched beamlines
            if prior and device.md.beamline != prior.md.beamline:
                # Find improperly configured optics
                for optic in last_branches:
                    # If this optic is responsible for delivering beam
                    # to this hutch and it is not configured to do so,
                    # mark it as blocking
                    if device.md.beamline in optic.branches:
                        if device.md.beamline not in optic.destination:
                            block.append(optic)
                    # Otherwise ensure it is removed from the beamline
                    elif optic.md.beamline not in optic.destination:
                        block.append(optic)
                # Clear optics that have been evaluated
                last_branches.clear()
            # If our last device was an optic, make sure it wasn't required
            # to continue along this beampath
            elif (prior in last_branches
                    and device.md.beamline in prior.branches
                    and device.md.beamline not in prior.destination):
                block.append(last_branches.pop(-1))
            # Find branching devices and store them;
            # they will be marked as blocking by downstream devices
            dev_state = find_device_state(device)
            if device in self.branches:
                last_branches.append(device)
            # Find inserted devices
            elif dev_state == DeviceState.Inserted:
                # Ignore devices with low enough transmission
                trans = getattr(device, 'transmission', 1)
                if trans < self.minimum_transmission:
                    block.append(device)
            # Find unknown and faulted devices
            elif dev_state != DeviceState.Removed:
                block.append(device)
            # Stash our prior device
            prior = device
        return block

    @property
    def incident_devices(self):
        """
        A list of devices the beam is currently incident on. This includes the
        current :attr:`.impediment` and any upstream devices that may be
        inserted but have more transmission than :attr:`.minimum_transmission`
        """
        # Find device information
        inserted = [d for d in self.path
                    if find_device_state(d) == DeviceState.Inserted]
        impediment = self.impediment
        # No blocking devices, all inserted devices incident
        if not impediment:
            return inserted
        # Otherwise only return upstream of the impediment
        return [d for d in inserted if d.md.z <= impediment.md.z]

    def show_devices(self, file=None):
        """
        Print a table of the devices along the beamline

        Parameters
        ----------
        file : file-like object
            File to write to
        """
        # Initialize Table
        pt = PrettyTable(['Name', 'Prefix', 'Position', 'Beamline', 'State'])
        # Adjust Table settings
        pt.align = 'r'
        pt.align['Name'] = 'l'
        pt.align['Prefix'] = 'l'
        pt.float_format = '8.5'
...

vscsi_util.py Source: vscsi_util.py

...
            devname = sg
        scsi_id = _vscsi_get_scsiid(sg)
        devices.append([hctl, devname, sg, scsi_id])
    return devices

def vscsi_get_scsidevices(mask="*"):
    """ get all scsi devices information """
    devices = _vscsi_get_scsidevices_by_lsscsi("[%s]" % mask)
    if devices or (len(mask) and mask[0] != "*"):
        # devices found or partial device scan
        return devices
    return _vscsi_get_scsidevices_by_sysfs()

def vscsi_get_hctl_and_devname_by(target, scsi_devices=None):
    if target.startswith('/dev/'):
        target = os.path.realpath(target)
    if scsi_devices is None:
        if len(target.split(':')) == 4:
            scsi_devices = _vscsi_get_scsidevices_by_lsscsi(target)
        elif target.startswith('/dev/'):
            scsi_devices = _vscsi_get_scsidevices_by_lsscsi("| grep %s" % target)
        else:
            scsi_devices = _vscsi_get_scsidevices_by_lsscsi("")
    if not scsi_devices:
        scsi_devices = _vscsi_get_scsidevices_by_sysfs()
    if len(target.split(':')) == 4:
        return _vscsi_get_devname_by(target, scsi_devices)
    else:
        return _vscsi_get_hctl_by(target, scsi_devices)

def get_scsi_vendor(pHCTL):
    try:
        sysfs_mnt = utils.find_sysfs_mount()
        sysfs_scsi_dev_path = \
            os.path.join(sysfs_mnt + SYSFS_SCSI_PATH, pHCTL)
        scsi_vendor = \
            os.popen('cat ' + sysfs_scsi_dev_path + \
                     SYSFS_SCSI_DEV_VENDOR_PATH).read()
        return scsi_vendor.splitlines()[0]
    except:
        return None

def get_scsi_model(pHCTL):
    try:
        sysfs_mnt = utils.find_sysfs_mount()
        sysfs_scsi_dev_path = \
            os.path.join(sysfs_mnt + SYSFS_SCSI_PATH, pHCTL)
        scsi_model = \
            os.popen('cat ' + sysfs_scsi_dev_path + \
                     SYSFS_SCSI_DEV_MODEL_PATH).read()
        return scsi_model.splitlines()[0]
    except:
        return None

def get_scsi_typeid(pHCTL):
    try:
        sysfs_mnt = utils.find_sysfs_mount()
        sysfs_scsi_dev_path = \
            os.path.join(sysfs_mnt + SYSFS_SCSI_PATH, pHCTL)
        scsi_typeid = \
            os.popen('cat ' + sysfs_scsi_dev_path + \
                     SYSFS_SCSI_DEV_TYPEID_PATH).read()
        return int(scsi_typeid.splitlines()[0])
    except:
        return None

def get_scsi_revision(pHCTL):
    try:
        sysfs_mnt = utils.find_sysfs_mount()
        sysfs_scsi_dev_path = \
            os.path.join(sysfs_mnt + SYSFS_SCSI_PATH, pHCTL)
        scsi_revision = \
            os.popen('cat ' + sysfs_scsi_dev_path + \
                     SYSFS_SCSI_DEV_REVISION_PATH).read()
        return scsi_revision.splitlines()[0]
    except:
        return None

def get_scsi_scsilevel(pHCTL):
    try:
        sysfs_mnt = utils.find_sysfs_mount()
        sysfs_scsi_dev_path = \
            os.path.join(sysfs_mnt + SYSFS_SCSI_PATH, pHCTL)
        scsi_scsilevel = \
            os.popen('cat ' + sysfs_scsi_dev_path + \
                     SYSFS_SCSI_DEV_SCSILEVEL_PATH).read()
        return int(scsi_scsilevel.splitlines()[0])
    except:
        return None

def _make_scsi_record(scsi_info):
    scsi_rec = {
        'physical_HCTL': scsi_info[0],
        'dev_name': None,
        'sg_name': scsi_info[2],
        'scsi_id': None
    }
    if scsi_info[1] is not None:
        scsi_rec['dev_name'] = scsi_info[1]
    if scsi_info[3] is not None:
        scsi_rec['scsi_id'] = scsi_info[3]
    scsi_rec['vendor_name'] = \
        get_scsi_vendor(scsi_rec['physical_HCTL'])
    scsi_rec['model'] = \
        get_scsi_model(scsi_rec['physical_HCTL'])
    scsi_rec['type_id'] = \
        get_scsi_typeid(scsi_rec['physical_HCTL'])
    scsi_rec['revision'] = \
        get_scsi_revision(scsi_rec['physical_HCTL'])
    scsi_rec['scsi_level'] = \
        get_scsi_scsilevel(scsi_rec['physical_HCTL'])
    try:
        lsscsi_info = os.popen('lsscsi %s 2>/dev/null' % scsi_rec['physical_HCTL']).read().split()
        scsi_rec['type'] = lsscsi_info[1]
    except:
        scsi_rec['type'] = None
    return scsi_rec

def get_scsi_device(pHCTL):
    scsis_info = _vscsi_get_scsidevices_by_lsscsi(pHCTL)
    if not scsis_info:
        scsis_info = _vscsi_get_scsidevices_by_sysfs()
    for scsi_info in scsis_info:
        if scsi_info[0] == pHCTL:
            return _make_scsi_record(scsi_info)
    return None

def get_all_scsi_devices(mask="*"):
    scsi_records = []
    for scsi_info in vscsi_get_scsidevices(mask):
        scsi_record = _make_scsi_record(scsi_info)
        scsi_records.append(scsi_record)
...
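The sysfs helpers above each shell out with `os.popen('cat ' + path)` just to read a single attribute file. For comparison, the same read-one-line-or-None pattern can be sketched with plain file I/O; the helper name and directory layout below are illustrative, not part of the original module:

```python
import os

def read_sysfs_attr(base, hctl, attr):
    # Read one sysfs attribute (e.g. 'vendor' or 'model') for a SCSI
    # device identified by its H:C:T:L string. Returns the first line of
    # the file, or None if the attribute is missing or unreadable --
    # mirroring the try/except-return-None shape of the helpers above.
    path = os.path.join(base, hctl, attr)
    try:
        with open(path) as f:
            return f.read().splitlines()[0]
    except (OSError, IndexError):
        return None
```

Reading the file directly avoids spawning a `cat` subprocess per attribute, which is the main cost of the `os.popen` approach.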
python bindings for libsass

README.rst

SassPython - bindings for libsass

why?

who?

marianoguerra

how?

first of all download, compile and install libsass:

    git clone https://github.com/hcatlin/libsass.git
    cd libsass
    ./configure
    make
    sudo make install

then you can play with this project in two ways

command line

if no options provided read from stdin:

    ➜ src ./sass.py
    table.hl td.ln { text-align: right; }
    table.hl td.ln {
      text-align: right; }

from a file:

    ➜ src ./sass.py -f ../examples/simple.scss
    .content-navigation {
      border-color: #3bbfce;
      color: darken(#3bbfce, 9%); }

    .border {
      padding: 8px;
      margin: 8px;
      border-color: #3bbfce; }

from a folder:

    http://chzscience.files.wordpress.com/2011/11/funny-science-news-experiments-memes-dog-science-fuzzy-logic.jpg

    # I think it doesn't work, never used sass before and don't know what
    # this means :)
    ➜ src ./sass.py -d ../examples/

you can't chew gum and walk at the same time:

    ➜ src ./sass.py -f ../examples/simple.scss -d ~
    usage: sass.py [-h] [-f FILE_PATH | -d DIR_PATH]
    sass.py: error: argument -d/--dir: not allowed with argument -f/--file

code

from a string:

    Python 2.7.3 (default, Apr 20 2012, 22:44:07)
    >>> import sass
    >>> STYLE = """
    ... table.hl td.ln {
    ...   text-align: right;
    ... }
    ... """
    >>> ok, style = sass.compile(STYLE)
    >>> ok
    True
    >>> print style
    table.hl td.ln {
      text-align: right; }

from a file:

    >>> ok, style = sass.compile_path("../examples/simple.scss")
    >>> ok
    True
    >>> print style
    .content-navigation {
      border-color: #3bbfce;
      color: darken(#3bbfce, 9%); }

    .border {
      padding: 8px;
      margin: 8px;
      border-color: #3bbfce; }

from a folder:

    >>> ok, style = sass.compile_folder("../examples/")
    # ???
    # Profit!

how to install?

from sources

python 2:

    sudo python2 setup.py install

python 3:

    sudo python3 setup.py install

using pip

    sudo pip install SassPython

license?

MIT + optional beer for the creator

what's left to do?

- make the folder stuff work
- add command line options to specify option styles
- see what the return value of the compile_* means and use it if needed
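The "not allowed with argument -f/--file" error in the command-line example above is the standard behavior of an argparse mutually exclusive group. A minimal sketch of how such a `-f`/`-d` pair can be declared (this illustrates the argparse feature; it is not the actual sass.py source):

```python
import argparse

def build_parser():
    # Two options that cannot be combined: supplying both makes
    # argparse print a usage error and exit, exactly like sass.py does.
    parser = argparse.ArgumentParser(prog='sass.py')
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-f', '--file', dest='file_path')
    group.add_argument('-d', '--dir', dest='dir_path')
    return parser
```

Calling `build_parser().parse_args(['-f', 'a.scss', '-d', '.'])` raises `SystemExit` after printing the same kind of usage message shown above.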
Published on

6 Ideas To Help You Build A Simple server status table With Tailwind CSS Like A Pro

Tags: Simple server status table

What is Tailwind CSS?
Tailwind CSS is a utility-first CSS framework focused on building custom user interfaces quickly. It gives you the building blocks you need to create bespoke designs without having to fight to override opinionated styles. Tailwind CSS is also a highly configurable, low-level CSS framework.

The description of the Simple server status table UI component
A simple server status table.

Why use Tailwind CSS to make a Simple server status table UI component?
• It makes building the Simple server status table UI component faster and easier.
• It lets you build complex responsive layouts and components freely.
• It keeps the CSS in the Simple server status table component file to a minimum.

The preview of the Simple server status table UI component

Free download of the Simple server status table's source code

The source code of the Simple server status table UI component

<table class="w-full">
  <tr class="w-1/2 bg-indigo-300">
    <th class="p-2 w-2/3">Server Name</th>
    <th class="p-2 text-center">Status</th>
    <th class="p-2 text-center pr-4">Country</th>
    <th class="p-2 text-center pr-4">Connected</th>
  </tr>
  <tr class="">
    <td class="p-2">Server 1</td>
    <td class="p-2 text-center text-green-600">UP</td>
    <td class="p-2 pr-4 flex justify-between"><span class="incident text-red-500"></span><span class="text-center">Germany</span><span class="pl-3">🇩🇪</span></td>
    <td class="text-center">
      <input class="my-auto" checked type="checkbox"/>
    </td>
  </tr>
  <tr class="bg-gray-200">
    <td class="p-2">Server 2</td>
    <td class="p-2 text-center text-green-600">UP</td>
    <td class="p-2 pr-4 flex justify-between"><span class="incident text-red-500"></span><span class="text-center">US</span><span class="pl-3">🇺🇸</span></td>
    <td class="text-center">
      <input class="my-auto" checked type="checkbox"/>
    </td>
  </tr>
  <tr class="">
    <td class="p-2">Server 3</td>
    <td class="p-2 text-center text-red-600">DOWN</td>
    <td class="p-2 pr-4 flex justify-between"><span class="incident text-red-500"></span><span class="text-center">France</span><span class="pl-3">🇫🇷</span></td>
    <td class="text-center">
      <input class="my-auto" type="checkbox"/>
    </td>
  </tr>
  <tr class="bg-gray-200">
    <td class="p-2">Server 1 (Mirror)</td>
    <td class="p-2 text-center text-green-600">UP</td>
    <td class="p-2 pr-4 flex justify-between"><span class="incident text-red-500">!</span><span class="text-center">Germany</span><span class="pl-3">🇩🇪</span></td>
    <td class="text-center">
      <input class="my-auto" type="checkbox"/>
    </td>
  </tr>
</table>

How to make a Simple server status table with Tailwind CSS?

Install Tailwind CSS version 3.0.18
Use a script tag to load Tailwind CSS 3.0.18 from the CDN:

<script src="https://cdn.tailwindcss.com"></script>

All the utility classes needed to make a Simple server status table component
• w-full
• w-1/2
• bg-indigo-300
• p-2
• w-2/3
• text-center
• pr-4
• text-green-600
• flex
• text-red-500
• pl-3
• my-auto
• bg-gray-200
• text-red-600

14 steps to make a Simple server status table component with Tailwind CSS
1. Use w-full to set an element to 100% width.
2. Use w-1/2 to set an element to a fractional width (50%).
3. Set the background color of an element to indigo-300 using the bg-indigo-300 utility.
4. Set the padding on all sides of an element to 0.5rem using the p-2 utility.
5. Use w-2/3 to set an element to a fractional width (66.67%).
6. Center the text of an element using the text-center utility.
7. Set the right padding of an element to 1rem using the pr-4 utility.
8. Set the text color of an element to green-600 using the text-green-600 utility.
9. Use flex to create a block-level flex container.
10. Set the text color of an element to red-500 using the text-red-500 utility.
11. Set the left padding of an element to 0.75rem using the pl-3 utility.
12. Set the vertical margins of an element to auto using the my-auto utility.
13. Set the background color of an element to gray-200 using the bg-gray-200 utility.
14. Set the text color of an element to red-600 using the text-red-600 utility.

Conclusion
The above is a step-by-step tutorial on how to use Tailwind CSS to make a Simple server status table component. Learn and follow along to implement your own components.
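To see the component render, the two snippets above (the CDN script tag and the table markup) go into a single page. A minimal sketch of how they fit together; the file structure here is illustrative, not part of the tutorial:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tailwind CSS 3.0.18 from the CDN, as in the install step above -->
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
  <!-- Paste the <table class="w-full"> markup from the source code section here -->
</body>
</html>
```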
What is the way to send or receive an image in e-mail?

Asked By 0 points N/A Posted on

I want to know how to send or receive a picture by e-mail.

Answered By 0 points N/A #97739

Here are some general tips for users who want to send or receive a picture by e-mail. You can send one or more pictures in a single e-mail. Before sending, it is a good idea to resize each image: a photo scanned or taken with a camera is usually large in size and resolution, and a large picture is slow to upload when you send the e-mail and slow to download for the recipient, so resize it to something smaller. It also helps to convert the picture to JPEG format, because almost every computer can open JPEG files. Thank you.

Best Answer — Answered By Sharath Reddy 569030 points #97741

You can send or receive an image or a picture through e-mail in the form of an attachment. If you are using a webmail service like I do, such as Yahoo!, Google Mail, or FastMail, here's how you can send an image by attaching it to the message.

1. Open and log in to your webmail account, then hit Compose to start creating your e-mail message. Fill out the appropriate fields for the message, like From, To, Subject, and of course the body of the e-mail. See the screenshot below.
2. Next, find and click the Attach File button, then browse for the picture you want to send and click Open to start attaching the file.
3. When the file has been attached, click the Send button to send the message. To test it, enter your own e-mail address in the To field to see how the message looks when received.
EEM Authentication slowness with CA Process Automation

Article ID: 76860
Updated On:

Products
CA Process Automation Base

Issue/Introduction
Question: It takes a long time to log into PAM. How can we speed up the login time into PAM?

Environment
Release:
Component: ITPAM

Resolution
Answer: This assumes EEM 12.51. On the EEM server:

1. Navigate to server.xml under C:\Program Files\CA\SC\EmbeddedEntitlementsManager\config\server
2. Back up your server.xml into a different directory outside of the EEM directory and rename it.
3. Open server.xml in a text editor.
4. Locate <paged> and modify it to true:
   <paged>true</paged>
5. Save server.xml.
6. Set group resolution to "Do not resolve groups":
   a. Log into EEM as the eiamadmin user, navigate to the Configure tab, select User Store along the top, then Group Configuration along the left navigation column.
   b. Set the Global Group Configuration drop-down to "Do Not Resolve Groups".
      NOTE: Leave the Application resolution level set to "Resolve nested groups".
   c. Save the configuration and log out of EEM.
7. Under C:\Program Files\CA\SC\iTechnology, locate and open the igateway.conf file in a text editor such as Notepad++. If you need to make a copy of this file, move the copy out of the \SC\ folder structure to a backup folder. Search for the term "asynchronous". There should be 2 occurrences, appearing as:
   <implementation>asynchronous</implementation>
   Remove the "a" from the term so that both occurrences now read:
   <implementation>synchronous</implementation>
8. Restart EEM.
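The two file edits above (steps 4 and 7) amount to simple text substitutions. Purely as an illustration, here are the before/after strings applied to sample fragments in Python, rather than to the real server.xml and igateway.conf files; the original value of <paged> is assumed to be false:

```python
# Illustrative only: the substitutions from steps 4 and 7, applied to
# sample fragments instead of the real server.xml / igateway.conf files.
server_xml_fragment = "<paged>false</paged>"
server_xml_fragment = server_xml_fragment.replace(
    "<paged>false</paged>", "<paged>true</paged>"
)
print(server_xml_fragment)  # <paged>true</paged>

igateway_fragment = "<implementation>asynchronous</implementation>"
igateway_fragment = igateway_fragment.replace("asynchronous", "synchronous")
print(igateway_fragment)  # <implementation>synchronous</implementation>
```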
This is a follow-up question concerning an experimental chapter layout I'm trying to implement for a novel I'm writing. So. My initial problem of finding a way to modify the innards of the parcolumns package so that it alternates the order of the columns with each page break (producing a layout with outer/inner columns instead of left/right ones) has been solved, with many thanks to alexurba. However: in implementing the working modification of the \pc@placeboxes macro, whenever the page breaks switch from odd to even, the placement of the minipages inside wrapfigure environments intended to be placed along the inner margin of the outer column causes problems for the inner column. I'll let the code and images below speak for themselves:

\documentclass[11pt,letterpage,twosides]{bookest}
\geometry{textheight=9in,vmarginratio=1:1,outermargin=1.5in,innermargin=.5in,%
  marginparwidth=1.35in,marginparsep=.1in}
\usepackage{parcolumns}
\usepackage{marginnote}
\usepackage{wrapfig}
\usepackage{lipsum}
\usepackage{changepage}
\strictpagecheck

\newcommand{\warfe}{\begin{wrapfigure}{o}[.15in]{1.5in}
  \begin{minipage}[t]{1.4in}
  \noindent \Large}
\newcommand{\warfo}{\begin{wrapfigure}{i}{1.5in}
  \begin{minipage}[t]{1.45in}
  \noindent \Large}
\newcommand{\wraf}{\end{minipage}
  \vspace{-.28in}
  \end{wrapfigure}}
\newcommand{\stm}{\marginnote}

\makeatletter

\def\pc@placeboxes{%
  \global\let\@tempa\relax%
  \hb@xt@\linewidth{%
    \vfuzz30ex %
    \vbadness\@M%
    \splittopskip\z@skip%
    \checkoddpage\ifoddpage
      \count@\z@%
      \loop\ifnum\count@<\pc@columncount%
        \advance\count@\@ne%
        \my@placeboxes@body%
      \repeat%
    \else
      \count@\pc@columncount%
      \loop\ifnum\count@>\z@%
        \my@placeboxes@body%
        \advance\count@\m@ne%
      \repeat%
    \fi
  }%
  \@tempa%
}

\def\my@placeboxes@body{%
  \expandafter\ifvoid\csname pc@column@\number\count@\endcsname%
    \hskip\csname pc@column@width@\number\count@\endcsname%
  \else%
    \expandafter\setbox\expandafter\@tempboxa%
      \expandafter\vsplit\csname pc@column@\number\count@\endcsname to \dp\strutbox%
    \vbox{\unvbox\@tempboxa}%
  \fi%
  \expandafter\ifvoid\csname pc@column@\number\count@\endcsname%
  \else%
    \global\let\@tempa\pc@placeboxes%
  \fi%
  \ifnum\count@>\z@%
    \strut%
    \hfill%
    \ifpc@rulebetween%
      \vrule%
      \hfill%
    \fi%
  \fi%
}

\makeatother

\begin{document}

%\chapter{Sample Experimental Chapter Layout}
%[Code and JPEGs omitted.]

\chapter{My Wrapfigure-Minipage Woes}

\begin{parcolumns}[sloppy=true,
  sloppyspaces=true,
  nofirstindent=true,
  colwidths={1=3.5in,
             2=2.8in}]{2}
\colchunk[1]{\Large \indent \stm{\normalsize \textbf{Sample margin note, always to be on
  the outer margin. \\ ---------- \\ The typesetting of this page is entirely correct, with
  no wrapfigure/minipage environments flying around as in the following page...}} \lipsum[10]%
  \lipsum[11] \lipsum[12] \stm{\textbf{\normalsize As one can see quite clearly, the
  typesetting of this page with switched columns is troubled, to put it lightly. \\
  ---------- \\ While the wrapfigure-minipage boxes themselves are placed correctly, another
  phantom empty box (of the same width) is inserted, flush with the edge of the page at the
  inner margin, pushing the text of the larger column over into the box set in the smaller
  column. \\ ---------- \\ What gives, friends?\\}} \lipsum[13] \lipsum[14]}
\colchunk[2]{\small \indent \warfo{Sample wrapped text box, using a minipage inside a
  wrapfig environment.} \wraf \lipsum[14] \warfo{Note: no problems on the odd pages...} \wraf%
  \lipsum[15] \warfe{Another sample wrapped text box, placed along the inner side of the \\
  column.} \wraf \lipsum[16] \warfe{Yet another text box.}\wraf \lipsum[17]}
\end{parcolumns}

\end{document}

Example, p.1
Example, p.2

I'll be taking a closer look at the implementation sections of the package documentation for wrapfig and parcolumns, reading against the modification of \pc@placeboxes to see if I can make any sense of what exactly might be going on here.
I'll update this if I get anywhere near the realm of progress.

UPDATE: In tinkering around with my code, I've discovered that if I use the {i} placement specifier for the wrapfigures on the even pages instead of {o}, this problem vanishes, and all is well except that the wrapfigures are on the wrong side of the column on the even pages. Thus, this problem seems to be specific to the wrapfigure not having any wrapped text between it and the adjacent column (or something to that effect, relating to having nothing but the space between columns separating the wrapfigures from the inner column). Given this development, I'm now wondering about these two possibilities:

a) Might it be possible to issue a series of commands to move the problem wrapfigures to the right, and have the adjacent wrapping text in the column move to the left, after the initial typesetting run in which parcolumns first places the columns and boxes on the page?

b) Looking at the documentation for parcolumns has reminded me that there's plenty the package does after the \pc@placeboxes process (i.e., \pc@alloccolumns, \pc@setcolumnwidths, \pc@setsinglecolwidth, and \pc@setcolumnwidth, as described on pp. 11-13 of the parcolumns documentation). In the (likely) event that the solution wouldn't necessarily have anything to do with setting the wrapfigures on the wrong side of the column (on even pages) and then flipping the positions of the wrapping text with those of the wrapfigure-minipage boxes themselves, perhaps the answer lies within the code for one of these post-\pc@placeboxes macros?

Also (for reference, if anyone needs it): alexurba has updated the answer to my original question with a more detailed explanation of how the \pc@placeboxes modification works, so check that out as well if you like.
Although there was indeed a bug in my patch to the parcolumns package in the thread about alternating columns, it turned out that the wrapfigure problem here is an incompatibility between the parcolumns and wrapfig packages. You can see that from the following example:

\documentclass{article}
\usepackage{parcolumns}
\usepackage{wrapfig}
\usepackage{lipsum}

\newcommand*{\dummypic}[1][dummy picture]{%
  \setlength{\fboxsep}{0pt}%
  \fbox{\begin{minipage}[t][2cm][c]{2cm}
    \centering #1
  \end{minipage}}%
}

\begin{document}
\begin{parcolumns}{2}
\colchunk[1]{%
  \begin{wrapfigure}{l}{22mm}
    \dummypic[\emph{left}]
  \end{wrapfigure}\par
  \lipsum[4]\par
  \begin{wrapfigure}{r}{22mm}
    \dummypic[\emph{right}]
  \end{wrapfigure}\par
  \lipsum[5]
}
\colchunk[2]{%
  \begin{wrapfigure}{l}{22mm}
    \dummypic[\emph{left}]
  \end{wrapfigure}\par
  \lipsum[7]\par
  \begin{wrapfigure}{r}{22mm}
    \dummypic[\emph{right}]
  \end{wrapfigure}\par
  \unskip\lipsum[8]
}
\end{parcolumns}
\end{document}

which compiles to:

[screenshot: wrapfig problem]

The reason is that parcolumns inserts an \hfill between the two columns. wrapfigure, on the other hand, works by changing the width of the paragraph next to the figure. The \hfill will always shift the narrowed paragraph to the page margin. The problem can be fixed by restricting the columns to their natural width, for example using an \hbox to <column width> {...}. I have implemented this in the small patch package parcolsx provided in the answer to the question about alternating columns. I do not know if that is the most elegant way to solve the problem, but it works.

[screenshot: corrected output]

I will for now not re-post the code here. If that is desired I (or someone else) can do it later.

5 comments
• Excellent!! Functionality trumps elegance as far as I'm concerned--I can finally start working on filling up those colchunks with material from my typewritten novel draft now. Seriously, thank you so very much for all your work on this! I'll update when I can get a chance to test it.
– Isaac Jul 26 '12 at 1:50
• Ok, I've implemented your new package in my initial .tex file for the example pages, and all is well except for the separation between the columns on even pages being about 1/3rd that of the separation on the odd pages. What could be tweaked in your parcolsx package to widen the separation on the even pages? This is a minor and trivial issue which pales in comparison to the major problems you've already tackled, though. (Preemptively: Don't apologize!) Again, many thanks for all the work you've put in on this. – Isaac Jul 26 '12 at 3:29
• @Isaac, that sounds like another bug with the separator. I am sure it is easily fixed, but that will have to wait until after working hours. – alexurba Jul 26 '12 at 7:15
• @Isaac - I just had a quick look and it seems that just two lines were interchanged (the order of a counter increment and the placing of the column separator). I just updated the code of the package in the other thread. Please carefully check everything. Maybe you could do me the favor and also check features of the parcolumns package that you do not currently use. If everything works correctly I will think about sending the patch to the parcolumns author. – alexurba Jul 26 '12 at 8:29
• Okay, according to the features described in the documentation for parcolumns, the only ones I wasn't using were the optional distance=[x] parameter for adjusting the distance separating the columns (2em, unless otherwise specified), as well as rulebetween=true. I adjusted the column widths to allow for various increases of the distance separating the columns with rulebetween=true, and everything's golden. Well done, sir. My most sincere thanks and gratitude for all your help. Send Mr. J. Sauer my regards! ;-) – Isaac Jul 26 '12 at 22:03
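To make the mechanism described in the answer concrete, here is a toy LaTeX sketch of the idea; the box construction below is illustrative only, not the actual parcolsx patch, and it borrows the 3.5in/2.8in column widths from the question:

```latex
% Toy sketch (not the actual parcolsx code): material joined by \hfill
% stretches to fill the full line, so a wrapfigure-narrowed paragraph is
% pushed out to the page margin. Boxing each column to its own fixed
% width confines the stretch to the gap between the columns:
\hbox to \linewidth{%
  \hbox to 3.5in{\vbox{...first column material...}\hss}%
  \hfill
  \hbox to 2.8in{\vbox{...second column material...}\hss}%
}
```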
Saturday, August 1, 2009

Can the various Viewer pieces of COLLADA DOM be used from Cocoa?

I gave it a try by stitching together samples from here and there.

Reference: previous entry

1. For now, the library setup looks something like this.

The Frameworks tree: the libraries under Other Frameworks are ones that either came with collada-dom or that I built myself. Cg.framework is the one installed in the previous entry, under /Library/Frameworks. libz and libxml2 are under /usr/lib. I don't know how Linked and Other Frameworks are supposed to be used differently. Is Linked for static and Other for dynamic?

2. For now, code something like this.

MyView.h

#import <Cocoa/Cocoa.h>
#import <OpenGL/CGLRenderers.h>
#import <OpenGL/gl.h>
#import <OpenGL/glu.h>
#import "Crt/CrtRender.h"

@interface MyView : NSOpenGLView
{
    NSTimer *timer;
}
@end

MyView.mm (note the extension: mm!)

#import "MyView.h"

@implementation MyView

CrtRender _CrtRender;

- (void) heartbeat
{
    if (![[NSApplication sharedApplication] isHidden]) {
        [self setNeedsDisplay:YES];
    }
}

- (id) initWithFrame: (NSRect) frame
{
    NSOpenGLPixelFormatAttribute attribs [] = {
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFADepthSize, 24,
        NSOpenGLPFAStencilSize, 8,
        0
    };
    NSOpenGLPixelFormat *fmt;
    {
        fmt = [[NSOpenGLPixelFormat alloc] initWithAttributes: attribs];
        NSOpenGLContext *preflight = [[NSOpenGLContext alloc] initWithFormat:fmt shareContext:nil];
        [preflight makeCurrentContext];
        [fmt release];
        const GLubyte* extensions = glGetString(GL_EXTENSIONS);
        if ((GL_FALSE == gluCheckExtension((GLubyte *)"GL_ARB_shader_objects", extensions)) ||
            (GL_FALSE == gluCheckExtension((GLubyte *)"GL_ARB_shading_language_100", extensions)) ||
            (GL_FALSE == gluCheckExtension((GLubyte *)"GL_ARB_vertex_shader", extensions)) ||
            (GL_FALSE == gluCheckExtension((GLubyte *)"GL_ARB_fragment_shader", extensions))) {
            attribs [3] = NSOpenGLPFARendererID;
            attribs [4] = kCGLRendererGenericFloatID;
        }
        [preflight release];
    }
    fmt = [[NSOpenGLPixelFormat alloc] initWithAttributes: attribs];
    [super initWithFrame: frame pixelFormat: fmt];
    [[super openGLContext] makeCurrentContext];
    [fmt release];
    return self;
}

- (void) dealloc
{
    if (timer) {
        [timer invalidate];
        [timer release];
    }
    [super dealloc];
}

- (void) awakeFromNib
{
    NSRect r = [self bounds];
    GLsizei width = ((int) r.size.width);
    GLsizei height = ((int) r.size.height);
    _CrtRender.SetScreenWidth(width);
    _CrtRender.SetScreenHeight(height);
    _CrtRender.Init();
    //_CrtRender.SetUsingVBOs( CrtTrue );
    _CrtRender.SetUsingNormalMaps( CrtTrue );
    _CrtRender.Load( "/Users/work/Documents/dominos.dae" );
    _CrtRender.SetAnimationOn( CrtTrue );

    timer = [NSTimer scheduledTimerWithTimeInterval: (1.0f/60.0f)
                                             target: self
                                           selector: @selector(heartbeat)
                                           userInfo: nil
                                            repeats: YES];
    [timer retain];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSDefaultRunLoopMode];
    [[NSRunLoop currentRunLoop] addTimer: timer forMode: NSEventTrackingRunLoopMode];

    glEnable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);
    glClearColor(0.9f, 0.9f, 0.9f, 1.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHTING);
    glEnable( GL_CULL_FACE );
    glCullFace( GL_BACK );
}

- (void) drawRect: (NSRect) rect
{
    GLfloat width = rect.size.width;
    GLfloat height = rect.size.height;
    if (height == 0) {
        height = 1;
    }
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0f, (GLfloat)width/(GLfloat)height, 0.1f, 100.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    _CrtRender.SetScreenWidth(width);
    _CrtRender.SetScreenHeight(height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if (_CrtRender.GetScene()) {
        _CrtRender.Render();
    }
    [[self openGLContext] flushBuffer];
}

@end

3. For now, the "Header Search Paths" setting looks something like this.

(This assumes collada-dom was extracted into /Users/work/Documents/.)

4. For now, the Interface Builder setup looks something like this.

Place a Custom View and set its class to MyView. It may also be a good idea to check all the Autosizing options so the view handles resizing.

5. For now, the result looks something like this.

[video] A clip of the original Viewer is here.
What's next?

To vote on something, you can usually vote up and down. "Liking" in Facebook is like voting up, with no down. In Reddit, however, we can vote things both up and down. This is a bit tricky! A few things make it quite tricky:

• When you submit a form with HTML the whole page refreshes, but we don't want that.
• We only want people to vote once either way.
• People generally want to be able to change their votes.

Let's start with how to vote up, and then see how we can vote down too.

1. Create a post
2. Show all posts
3. Show one post
4. Comment on posts
5. Create subreddits
6. Sign up and Login
7. Associate posts and comments with their author
8. Make comments on comments
9. Vote a post up
   1. Make vote form
   2. Add jQuery AJAX scripts
   3. Write vote up and vote down routes
   4. Add new attributes to Post model
   5. Update DOM with response
   6. Restrict to 1 vote per user
   7. Let people 'undo' their vote
10. Sort posts by # of votes

Voting Plan

Let's make a plan by doing what we always do: looking very carefully at what the user will be able to see and do with our app. Users should be able to click on up and down arrows to vote a post up or down. The page should not refresh, and there should be some indication that you voted up. Once you've voted up or down once, you can't vote up or down again. Ideally, a user could reverse their vote down by voting up to get back to no votes, and then vote up, or vice versa, but let's leave this until the end.

Submitting a Form Through AJAX

We start by adding vote up and vote down forms to our post. But these forms, remember, need to be submitted without the default <form> behavior of refreshing the page. For that we're going to use jQuery, and specifically AJAX requests, to handle them without refreshing.

Classes (Not Ids) - We'll add two forms and tag them with the classes vote-up and vote-down. We'll use these classes as our selectors to fire off the AJAX request to vote up or down.
We use classes here because id's must be unique, but classes can repeat in one HTML template. In this case, there will be many posts on the posts#index page, so we must use a class instead of an id.

Data Attribute - Each of these posts is unique even if there are many on the page. How do we know which one we are voting on? How do we communicate this to the server? To solve this problem, we'll use the data-id attribute and render the post's _id attribute in each form. Then we can pull that id into the /posts/:id/vote-up or /posts/:id/vote-down path when we submit the form.

<li class="list-group-item">
  <div class="lead">{{post.title}}</div>
  <a href="{{post.url}}" target="_blank">{{post.url}}</a>
  <div class="text-right">
    <a href="/n/{{post.subreddit}}">{{post.subreddit}}</a>
  </div>
  <form class="vote-up" data-id="{{post._id}}">
    <button type="submit">Vote Up</button>
  </form>
  |
  <form class="vote-down" data-id="{{post._id}}">
    <button type="submit">Vote Down</button>
  </form>
</li>

Adding the AJAX

We're going to add the AJAX request in a new file, public/js/posts.js (I know, another posts.js file! Be careful as you navigate your files). We already included jQuery in this project when we added Bootstrap, so all our jQuery functions should work splendidly. But we need to add this new script to the <head> tag in layouts/main.handlebars:

<head>
  ...
  <script rel="script" src="/js/posts.js"></script>
</head>

$(document).ready(function() {
  $('.vote-up').submit(function (e) {
    e.preventDefault();
    var postId = $(this).data('id');
    $.ajax({
      type: 'PUT',
      url: 'posts/' + postId + '/vote-up',
      success: function(data) {
        console.log("voted up!");
      },
      error: function(err) {
        console.log(err.message);
      }
    });
  });

  $('.vote-down').submit(function (e) {
    e.preventDefault();
    var postId = $(this).data('id');
    $.ajax({
      type: 'PUT',
      url: 'posts/' + postId + '/vote-down',
      success: function(data) {
        console.log("voted down!");
      },
      error: function(err) {
        console.log(err.message);
      }
    });
  });
});

Now if we click vote up or down, check the Network tab of your Developer Tools and watch each request flying out to your server. Click on one and see what error is coming back. You can even preview the response. Route not found! Great!

Writing the Vote-Up/Down Routes

Now that we are submitting to the routes, we have to write them. Let's put them at the top of the controllers/posts.js file. Reminder: we're using PUT because we are editing an existing resource. We want to track who voted on what, and we want to know the total score of the votes. So we can pretend the Post model has three new attributes: upVotes, downVotes, and voteScore.

app.put('/posts/:id/vote-up', function (req, res) {
  Post.findById(req.params.id).exec(function (err, post) {
    post.upVotes.push(req.user._id);
    post.voteScore = post.voteScore + 1;
    post.save();
    res.status(200).send();
  });
});

app.put('/posts/:id/vote-down', function (req, res) {
  Post.findById(req.params.id).exec(function (err, post) {
    post.downVotes.push(req.user._id);
    post.voteScore = post.voteScore - 1;
    post.save();
    res.status(200).send();
  });
});

Feedback

If you have feedback on this tutorial or find any mistakes, please open issues on the GitHub Repository or comment below.
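The routes above lean on three Post attributes the tutorial hasn't defined yet (step 4 in the outline at the top). As a minimal sketch of the bookkeeping they perform, here is the same logic on a plain object standing in for a Mongoose document; the attribute names match the routes, but this is an illustration, not code from the tutorial:

```javascript
// Sketch of the vote bookkeeping, on a plain object instead of a Mongoose document.
function voteUp(post, userId) {
  post.upVotes.push(userId);           // remember who voted up
  post.voteScore = post.voteScore + 1; // running total goes up by one
}

function voteDown(post, userId) {
  post.downVotes.push(userId);         // remember who voted down
  post.voteScore = post.voteScore - 1; // running total goes down by one
}

var post = { upVotes: [], downVotes: [], voteScore: 0 };
voteUp(post, 'user-1');
voteDown(post, 'user-2');
console.log(post.voteScore); // 0: one up vote and one down vote cancel out
```

Keeping the arrays of user ids alongside the running score is what later makes "restrict to 1 vote per user" and "undo your vote" possible: you can check whether a user's id is already in either array before changing the score.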
For example I have:

my $str = '\t';
print "My String is ".$str;

I want the output to interpret the tab character and return something like: "My String is \t". I am actually getting the value of the string from the database, and it returns it as a single quoted string.

• Single or double quoted is what happens in source code. Databases do not return source code, so it is not possible for a database to "return it as a single quoted string". What does your database return? A two-character string consisting of a backslash and a "t"? Or does it return a one-character string consisting of a tab character? – tadmc Apr 5 '11 at 2:20

4 Answers

Well, I just tried the workaround below and it worked. Please have a look:

my $str1 = "1234\n\t5678";
print $str1;
# it prints
# 1234
#     5678

$str1 =~ s/\t/\\t/g;
$str1 =~ s/\n/\\n/g;
print $str1;
# it prints the escapes literally
# 1234\n\t5678

You can follow the technique in perlfaq4's answer to "How can I expand variables in text strings?":

If you can avoid it, don't, or if you can use a templating system, such as Text::Template or Template Toolkit, do that instead. You might even be able to get the job done with sprintf or printf:

my $string = sprintf 'Say hello to %s and %s', $foo, $bar;

However, for the one-off simple case where I don't want to pull out a full templating system, I'll use a string that has two Perl scalar variables in it. In this example, I want to expand $foo and $bar to their variables' values:

my $foo = 'Fred';
my $bar = 'Barney';
$string = 'Say hello to $foo and $bar';

One way I can do this involves the substitution operator and a double /e flag. The first /e evaluates $1 on the replacement side and turns it into $foo. The second /e starts with $foo and replaces it with its value.
$foo, then, turns into 'Fred', and that's finally what's left in the string:

$string =~ s/(\$\w+)/$1/eeg; # 'Say hello to Fred and Barney'

The /e will also silently ignore violations of strict, replacing undefined variable names with the empty string. Since I'm using the /e flag (twice even!), I have all of the same security problems I have with eval in its string form. If there's something odd in $foo, perhaps something like @{[ system "rm -rf /" ]}, then I could get myself in trouble.

To get around the security problem, I could also pull the values from a hash instead of evaluating variable names. Using a single /e, I can check the hash to ensure the value exists, and if it doesn't, I can replace the missing value with a marker, in this case ??? to signal that I missed something:

my $string = 'This has $foo and $bar';

my %Replacements = (
    foo => 'Fred',
);

# $string =~ s/\$(\w+)/$Replacements{$1}/g;
$string =~ s/\$(\w+)/
    exists $Replacements{$1} ? $Replacements{$1} : '???'
/eg;

print $string;

• Remember: Perlfaq is your friend (almost as much as SO!). – Francisco R Apr 5 '11 at 6:55

'\t' and "\t" are string literals, pieces of Perl code that produce strings ("\", "t", and the tab character respectively). The database doesn't return Perl code, so describing the problem in terms of single-quoted literals and double-quoted literals makes no sense. You have a string, period. The string is formed of the characters "\" and "t". You want to convert that sequence of characters into the tab character. That's a simple substitution:

s/\\t/\t/g

I presume you don't want to deal with just \t. You can create a table of the sequences:

my %escapes = (
    "t"  => "\t",
    "n"  => "\n",
    "\\" => "\\",
);

my $escapes_pat = join('', map quotemeta, keys(%escapes));
$escapes_pat = qr/[$escapes_pat]/;

s/\\($escapes_pat)/$escapes{$1}/g;

String::Interpolate does exactly that:

$ perl -MString::Interpolate=interpolate -E 'say "My String is [".
interpolate(shift) . "]"' '\t'
My String is [	]

(The whitespace between the brackets is the interpolated tab.)