Since the site is served over HTTP, anyone in the middle can change its contents, including the JavaScript. To fix this, you need to block scripts on HTTP by adding this to uBlock Origin's My Filters:

|http://*|$script,inline-script

It seems that the bookmarklet URL that DuckDuckGo generates doesn't work. I enabled "Infinite Scroll" and disabled "Keyboard Shortcuts", which generated this URL: https://duckduckgo.com/?kav=1&kk=-1 However, when visiting it from a fresh session (no cookies) and checking the settings again, those options are not saved. Is this broken just for me?

Also, when generating these bookmarklet URLs, shouldn't DuckDuckGo sort all the options in the URL by the order they appear on the settings page? Right now it just appends the last option you changed to the end of the URL, which means bookmarklets that change the same settings can have multiple URLs, which in theory would make certain requests more unique than others. Not a big deal, but something that could easily be changed too.

Reply:
> However, when visiting it from a fresh session (no cookies) and checking the settings again, those options are not saved. Is this broken just for me?

When you look at the bookmarklet and settings, you're seeing information stored in the non-user-identifiable cookie that DDG sets, so the bookmarklet settings aren't represented there. I tested your bookmarklet and it seems to be working as expected; is that your experience as well?

Original Poster:
> the bookmarklet settings aren't represented there

Oh, right. The settings in the bookmarklet are indeed used by DDG; they're just not displayed as such on the settings page, hence my confusion. Thanks!

Maybe DDG should read the settings from both the URL and the cookie, instead of just the cookie?

Hi, I am a bit confused about what the option "Cookies from unvisited websites" actually means compared to the "All third party cookies" option. Aren't any sites that you didn't visit considered third-party? So what's the difference?

Reply: The goal of this feature is to block third-party cookies from websites that were not visited directly by the user. It does not keep blocking third-party cookies from a previously unvisited website once the user visits it. https://wiki.mozilla.org/Block_Cookies_From_Unvisited_Sites/TestPlan

Original Poster: So if I visit website A, and then later visit site B that uses third-party cookies from site A, they will be allowed, because site A is not considered third-party anymore, as I visited it. I think this doesn't work when privacy.firstparty.isolate is set to true. That is why I was not seeing any difference when testing both options.

So either trust the ISP, or trust Mozilla plus the third parties they decide are trustworthy? I don't think Mozilla should be making this decision for its users. It's not transparent (since most users won't even know), it sets a precedent, and it has a big potential to be abused, regardless of how much they trust the resolvers. Doesn't Firefox ask before using other kinds of telemetry, and have options to disable it? Then how is sending every DNS request somewhere else suddenly acceptable, requiring no user interaction?
After disabling autoplay with:

media.autoplay.allow-muted;false
media.autoplay.default;1
media.autoplay.enabled.user-gestures-needed;false

it's sometimes difficult to know whether an image is just a static image or an actual GIF, as it doesn't start playing automatically. For example: https://www.reddit.com/r/gifs/comments/bawlil/watch_and_learn_guys/ Is there any way to make Firefox always show the video controls for GIFs?

Original Poster: Thanks, but I don't want to install yet another add-on just for this little thing. Stuff like this should be possible to enable in about:config, since it's completely counterintuitive not to know whether an image is animated or not, unless you right-click every image.

Reply: It isn't a GIF, it is a video. This may help: https://i.imgur.com/i5XyAW5.mp4

Original Poster: But the point is that just looking at the image with autoplay off, it's not clear whether it's even supposed to play or is a static image.

Reply: Yeah, not clear to me either. Sorry, no idea, and nothing jumps out at me in about:config either. Maybe an extension would help.

Original Poster: I see, so it's an MP4. I can't find any preference either...

Any way to disable all JavaScript not served through HTTPS? I think Firefox already stops scripts that are served through HTTP while on an HTTPS page, but it seems to me that any JavaScript served over HTTP should be considered insecure?

Reply: |http://*$script How about inline scripts on non-secure pages?

Original Poster: Thanks! Yes, that would also be desired. I want to disable any JS from HTTP. Would this work?

|http://*$script,inline-script

And what is the difference between the one above and this one?

||http://*$script,inline-script

Is there any way to disable autopause (and automute), so that even when you minimize Firefox, or scroll down, or switch tabs, the video keeps playing?

Reply: It depends on the implementation.

Original Poster: What are the various implementations? And how do you fix this behaviour for each one?

Reply: For example, the Page Visibility API can be disabled by https://addons.mozilla.org/firefox/addon/disable-page-visibility/ It can also be a simple event (scroll) check for whether the player is still on the screen, which needs custom scripts to disable events / block scripts.

Original Poster: That add-on works pretty well. Videos don't get paused anymore when minimizing/alt-tabbing/switching tabs, but they do when scrolling down/up the page, which I suppose is controlled by other events as you explained. I wonder why an add-on is needed to disable the Page Visibility API. Why isn't there any preference in about:config? It would be useful.

[Comment deleted by user]

Original Poster: That doesn't explain how it works exactly, which is what I asked.

Reply: It sets the Accept-Language header that your browser sends in requests for resources like HTML pages, and makes Firefox represent dates/times in UTC format.

Original Poster: Thanks. It seems that if the list of languages in use is: English (United States) [en-us], English [en], then this preference doesn't really do anything and the request headers are the same. It is only useful for users that have a non-default languages list.
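For what it's worth (this note is an addition, not part of the original thread): in standard ABP/uBlock static filter syntax, the two forms anchor differently. A single leading | pins the match to the very start of the URL, while || pins it to the start of the hostname, so only the single-| form is a match on the http:// scheme:

```
! Anchored to the start of the full URL: matches any address that
! begins with "http://", which is the scheme-based block intended here.
|http://*$script,inline-script

! "||" anchors at the beginning of the hostname instead, so this form
! would compare "http://" against the domain name rather than the scheme.
||http://*$script,inline-script
```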
wiki:TypeNats/InductiveDefinitions
Version 10 (modified by diatchki, 6 years ago) (diff)

The module GHC.TypeLits provides two views on values of type TNat, which make it possible to define inductive functions using TNat values.

Checking for Zero (Unary Structure of Nat)

The first view provides the same functionality as the usual Peano-arithmetic definition of the natural numbers. It is useful when using TNat to count something.

isZero :: TNat n -> IsZero n

data IsZero :: Nat -> * where
  IsZero :: IsZero 0
  IsSucc :: !(TNat n) -> IsZero (n + 1)

By using isZero we can check whether a number is 0 or the successor of another number. The interesting aspect of isZero is that the result is typed: if isZero x returns IsSucc y, then the type checker knows that the type of y is one smaller than that of x.

Checking for Evenness (Binary Structure of Nat)

The other view provides a more "bit-oriented" view of the natural numbers, by allowing us to check whether the least significant bit of a number is 0 or 1. It is useful when we use TNat values for splitting things in half:

isEven :: TNat n -> IsEven n

data IsEven :: Nat -> * where
  IsEvenZero :: IsEven 0
  IsEven     :: !(TNat (n + 1)) -> IsEven (2 * (n + 1))
  IsOdd      :: !(TNat n) -> IsEven ((2 * n) + 1)
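As an illustration of the kind of inductive definition the isZero view enables (this sketch is ours, not part of the wiki page, and the function name is made up), the view can drive a structurally decreasing recursion:

```haskell
-- Hypothetical example built on the GHC.TypeLits API described above.
-- Each IsSucc match hands back a TNat that the type checker knows is
-- exactly one smaller, so the recursion terminates at IsZero.
tnatToInteger :: TNat n -> Integer
tnatToInteger n = case isZero n of
  IsZero   -> 0
  IsSucc m -> 1 + tnatToInteger m
```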
Internet Cookies – Are They Harmful?

Many people ask the question of whether cookies are harmful to their computers. The answer is generally no, but it is not nearly as straightforward as it sounds. While cookies are often stored in text files, are only used to track information that has already been provided to various websites, and are set to expire in short periods of time, they can be used for malicious purposes in certain instances. You will need to work with cookies; however, if you know when to delete them and when not to, you can still enjoy the conveniences they provide.

When Cookies are Good

Cookies are used by websites and browsers to store basic information about your Internet use. If you visit a pet store online and commonly look at cat products, the store will install a cookie that saves your preferences and tracks which pages you look at, not only telling the company what their customers are viewing, but giving you targeted results on the front page.

Additionally, browsers use cookies to store login information that you may not want to enter repeatedly. If you ever click the "save password" button below a login form, you are installing a cookie on your computer that will automatically insert your password into the fields. For those with multiple logins, this can be convenient and time-saving. In addition, bulky information such as IP addresses and browser preferences for a website can be stored in a cookie, saving time when loading a website you often visit. Generally, cookies are used as a shortcut to load information that you have already provided to a website, presenting no security risk on well-known, established websites.

When Cookies are Bad

It is on websites that are not well known and established that cookies may become a concern. Because they can be installed without your knowledge, cookies can be installed by a third party from a website that is less than desirable. If you enter login information on a website that installs a cookie on your computer for malicious purposes, that individual can then take the information stored in that cookie and try to use it to steal other pieces of your information. This is easily solved in most cases by setting a higher security level for cookies, requiring that the browser ask you before saving any cookies to your browser or hard drive.

Another possible way in which cookies can negatively affect your computer is when they are stored on your hard drive for too long. Generally, the information there is safe because it consists of non-executable text files. However, if malware or spyware is installed on your computer, it may be possible for it to access those cookies and start retrieving your login, email, and personal information and sending it back to whoever installed the spyware.

Being Safe with Cookies

Cookies are necessary to run an Internet browser. They make it possible to visit most websites and can actually speed up your browsing experience and make it safer. To feel more at ease with them on your computer, you should set your browser to delete cookies on a regular basis. Some browsers may keep them for up to 30 or 60 days automatically. With a third-party privacy tool, or by setting the browser to delete them every day, you can remove them from your hard drive before anyone can access and use your information. In combination with a good spyware removal tool and sensible browsing habits, you can overcome the possible negative effects that cookies might have on you and your PC's privacy.

Source by Jeff Wilson
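To make the mechanics concrete (the following exchange is illustrative, with made-up names and dates, and is not from the original article): a website sets a cookie via an HTTP response header, optionally with an expiry date after which the browser discards it, and the browser sends it back on later requests to the same site:

```
HTTP/1.1 200 OK
Set-Cookie: session_id=abc123; Expires=Wed, 21 Oct 2026 07:28:00 GMT; Path=/; Secure; HttpOnly

GET /account HTTP/1.1
Host: www.example.com
Cookie: session_id=abc123
```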
New in Angular 4 - Dynamic Components
19.05.2017 Stefan Welsch Mobile Tech angular handson

With the new ngComponentOutlet directive it is now possible to create dynamic components in a declarative way. Before, creating dynamic components was complex and time-consuming because you had to write a lot of code yourself. With the ngComponentOutlet directive it just got a lot easier!

In the following example, the user should be greeted accordingly: on the first visit to the page, a different greeting should be displayed than for a recurring visit.

@Component({
  selector: 'app-first-time-visitor',
  template: '<h1>Welcome to b-nova!</h1>',
})
export class FirstTimeVisitorComponent {}

@Component({
  selector: 'app-repeating-visitor',
  template: '<h1>Welcome Back to b-nova!</h1>',
})
export class RepeatingVisitorComponent {}

@Component({
  selector: 'app-root',
  template: `
    <h1>Hello b-nova Friend</h1>
    <ng-container *ngComponentOutlet="welcome"></ng-container>
  `
})
export class App implements OnInit {
  welcome = FirstTimeVisitorComponent;

  ngOnInit() {
    if (!this.user.isfirstVisit) {
      this.welcome = RepeatingVisitorComponent;
    }
  }
}

In the application we bind the ngComponentOutlet directive to the welcome reference, which is initialized with FirstTimeVisitorComponent. ngOnInit then checks whether it is a first or a recurring visit. For a returning visitor, the welcome reference is set to RepeatingVisitorComponent, so the returning visitor receives the message Welcome Back to b-nova! instead of Welcome to b-nova!.

This text was automatically translated with our golang markdown translator.

Stefan Welsch - pioneer, stuntman, mentor. As the founder of b-nova, Stefan is always looking for new and promising fields of development. He is a pragmatist through and through and therefore prefers to write articles that are as close as possible to real-world scenarios.
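One caveat worth adding, based on how Angular of that era generally handled dynamically created components (this is our note, not part of the original post): components rendered through ngComponentOutlet never appear statically in a template, so they typically had to be registered as entry components so that ahead-of-time builds would generate their factories. A hypothetical module wiring for the example above:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

// Sketch only: names mirror the example above.
@NgModule({
  declarations: [App, FirstTimeVisitorComponent, RepeatingVisitorComponent],
  // Needed so factories exist for components created only at runtime.
  entryComponents: [FirstTimeVisitorComponent, RepeatingVisitorComponent],
  imports: [BrowserModule],
  bootstrap: [App],
})
export class AppModule {}
```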
Engineering Knowledge: Engineering Mathematics
Engineering Algebra (EK-EM-1) Engineering Performance Matrix

Algebra is a branch of mathematics that focuses on the conventions related to the use of letters and other general symbols, known as variables, to represent numbers and quantities, without fixed values, in formulae and equations. Algebra is important to Engineering Literacy, as engineering professionals habitually select and use algebraic content and practices in the analysis, design, and making of solutions to engineering problems. For example, the related mathematical applications are used on a daily basis to solve formulas to determine an unknown value from a measured or known value, such as the voltage in an electrical circuit using Ohm's Law.

Performance Goal for High School Learners

I can, when appropriate, draw upon the knowledge of algebraic content and practices, such as (a) the basic laws of algebraic equations, (b) reasoning with equations and inequalities, (c) representing equations in 2D and 3D coordinate systems, and (d) linear algebra, to solve problems in a manner that is analytical, predictive, repeatable, and practical.

Performance levels: Basic, Proficient, Advanced

Recognizing, selecting, and applying appropriate algebraic concepts & practices:
- I can distinguish between provided algebraic concepts based on their application.

Manipulation of algebraic equations:
- I can apply a given algebraic concept when provided with direction towards its application.
- I can select and use the appropriate algebraic concept for the engineering situation at hand.
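As a concrete illustration of the kind of algebraic practice described above (this worked example is ours, not part of the AEEE matrix): Ohm's Law relates voltage, current, and resistance as V = I × R, and simple algebraic manipulation solves for whichever quantity is unknown. For instance, with a measured current of 2 A through a resistance of 6 Ω:

V = I × R = 2 A × 6 Ω = 12 V
I = V / R = 12 V / 6 Ω = 2 A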
scatter diagram example problems pdf

A scatter chart template is a graph of plotted points that shows the relationship between two sets of data. It is known by many names, such as scatter plot, scatter diagram and scatter graph. Other charts use lines or bars to show data, while a scatter diagram uses dots; this may be confusing, but it is often easier to understand than lines and bars. The points are plotted against horizontal and vertical axes to show how much one variable is affected by another: the independent variable X is on the horizontal axis, and the dependent variable Y along the vertical axis. A scatter plot provides the most useful way to display bivariate (2-variable) data and can indicate the presence or absence of an association or relationship between two variables.

"Scatter Diagram is used to study and identify the possible relationship between two variables." It is one of the seven basic tools of quality, but many professionals find it to be a difficult concept. Scatter diagrams are very useful in regression modeling (Montgomery, 2009; Oakland, 2003). The shape of the scatter diagram often shows the degree and direction (positive or negative) of the relationship between two variables, and the correlation may reveal the causes of a problem. By testing the data with a scatter diagram, you can determine whether you are on the right track or looking at the wrong possible causes. In this blog post, I will explain the scatter diagram; it is the best validation tool. Read: Scatter Diagram Explained with Example. Scatter Plots: Properties, Characteristics, and Examples.

In quality improvement you have defined the problem using Pareto diagrams and pinpointing, and you have analyzed how the process is behaving using process flow diagrams, histograms and control charts. The scatter diagram is an important tool to help you determine whether there is a relationship between one item and another that could be causing a defect or problem. In the following example we are investigating and seeking to control the output from a process where there is a high-speed drilling machine: we want to determine if the speed of the drill (drill speed) used in the process impacts the smoothness of the final cut finish on a metal bar. In the scatter plot diagram, the cause (speed) is on the x-axis. Figure 1 is an example of a scatter diagram for this case.

An Ishikawa diagram is a visualization of the causes of a failure grouped into categories. Also called a cause-and-effect diagram, a fishbone diagram sorts possible causes into various categories that branch off from the original problem, and may have multiple sub-causes branching off of each identified category. Introduced by Kaoru Ishikawa, the fishbone diagram helps users identify the various factors (or causes) leading to an effect, usually depicted as a problem to be solved, and it is used for problem analysis, root cause analysis and quality improvement to identify factors that have contributed to a problem. A problem is composed of a limited number of causes, which are in turn composed of sub-causes; distinguishing these causes and sub-causes is a useful step in dealing with the problem. A fishbone diagram helps team members visually diagram a problem or condition's root causes, allowing them to truly diagnose the problem rather than focusing on symptoms; it allows team members to separate a problem's content from its history, and allows for team consensus around the problem.

Fishbone diagram, scatter diagram, run charts, flow charts. Description: the Pareto diagram is named after Vilfredo Pareto, a 19th-century Italian economist who conducted a study in Europe in the early 1900s on wealth and poverty. He found that wealth was concentrated in the hands of the few and poverty in the hands of the many.

Problem Tree Analysis – Procedure and Example: problem tree analysis helps stakeholders to establish a realistic overview and awareness of the problem by identifying the fundamental causes and their most important effects. The main output of the exercise is a tree-shaped diagram. Relatedly, problem-solving using a Venn diagram is a widely used approach in many areas such as statistics, data science, business, set theory, math and logic; the best way to explain how the Venn diagram works and what its formulas show is to give 2- or 3-circle Venn diagram examples and problems with solutions.

From a TQM thesis outline: 2.6.5 Cause and effect diagram (fishbone diagram); 2.6.6 Scatter diagram; 2.6.7 Flowcharts; Chapter 3, Methodology; Chapter 4, Adoption and Implementation of TQM in the Construction Industry; Questionnaire #1; Analysis of questionnaire #1 response rate.

Module 6, Unit 3, Data representation: in a bar graph or bar-line graph the height of the bar or line is proportional to the frequency. Bars are to be drawn equally separated, with the same width. The discrete value or category is placed at the centre of the bar.

Syllabus notes: scatter diagram; Karl Pearson's method (two variables, ungrouped data); Spearman's Rank Correlation to be explained with the help of numericals. (vi) Index numbers: simple and weighted; meaning, types and purpose; problems involved in constructing a price index.

Example 1: Plotting Scatter Graphs. Below is a table of 11 students' scores out of 100 on their Maths and English tests. Plot a scatter graph from this data. [3 marks] We will put the Maths mark on the x-axis and the English mark on the y-axis. (Three interesting examples to introduce the concepts of scatter diagrams and correlation.)

A scatter plot is a graph with points plotted to show a relationship between two sets of data. The scatter plot below shows their results with the line of best fit. Using the line of best fit, which is closest to the number of minutes it would take to complete 9 laps? A. 4 B. 5 C. 6 D. 7

Use the scatter plot to answer the question: which equation most closely represents the line of best fit for the scatter plot below? A. y = x B. y = (2/3)x + 1 C. y = (3/2)x + 4 D. y = (3/2)x + 1. Another stem from the same worksheet: Oren plants a new vegetable garden each year for 14 years.

Making a scatter plot of a data set: a teacher surveyed her students about the amount of physical activity they get each week; she then had their body mass index (BMI) measured. Use her data to make a scatter plot. Similarly, let us take an example of scatter plots: given that a teacher collected data about exams in each of her classes from the previous year, use her data to make a scatter plot.

Scatter Plot Output, Example 2 – Scatter Plot with a Regression Line and Prediction Limits: this section presents an example of how to generate a simple scatter plot with a regression line and prediction limits. The data used are from the Size dataset. We will create a scatter plot. (See also: a scatter diagram, as in Section 11.4.2.)

Worked Example 2: the scatter diagram shows the ages, in years, and the selling prices, in thousands of pounds, of second-hand cars of the same model. The cars have been advertised for sale in a local paper. [Figure: scatter diagram with Age (years) against Price (thousands of £).] (a) Show this information on the scatter graph. (b) Describe the correlation between the value of the car and the age of the car. (1 mark) Another car arrives at the garage; it is 4 years old and worth £5000. The next car that arrives is 6 years old.

From a mechanics worked example (19.4, Shear force and bending moment): a simply supported beam is subjected to a combination of loads as shown in the figure. Sketch the shear force and bending moment diagrams and find the position and magnitude of the maximum bending moment. Solution: to draw the shear force diagram and bending moment diagram we need the reactions R_A and R_B.

Edraw is used as scatter plot software containing built-in scatter plot symbols and ready-made scatter plot templates that make it easy for anyone to create attractive scatter plots. The scatter plot templates are easy to use and free, and Edraw can also convert all these templates into PowerPoint, PDF or …
Documentation

cgsl_0104: Modeling global shared memory using data stores

ID / Title: cgsl_0104: Modeling global shared memory using data stores

Description: When using data store blocks to model shared memory across multiple models:
A. In the Configuration Parameters dialog box, on the Diagnostics pane, set Data Validity > Data Store Memory block > Duplicate data store names to error for models in the hierarchy.
B. Define the data store using a Simulink® Signal or MPT Signal object.
C. Do not use Data Store Memory blocks in the models.

Notes: If multiple Data Store blocks use the same data store name within a model, then Simulink interprets each instance of the data store as having a unique local scope. Use the diagnostic Duplicate data store names to help detect unintended identifier reuse. For models intentionally using local data stores, set the diagnostic to warning, and verify that only intentional data stores are included. Merge blocks, used in conjunction with subsystems operating in a mutually exclusive manner, provide a second method of modeling global data across multiple models.

Rationale (A, B, C): Promotes a modeling pattern where a single, consistent data store is used across models and a single global instance is created in the generated code.

Last Changed: R2011b

Examples

The following examples illustrate the use of data stores as global shared memory. The data store is used to model a global fault flag. A data store is required because the flag can be set in multiple functions and used in the same execution step. The top model contains three subsystems, each utilizing the data store memory. The data store is defined using a signal data object.

Recommended: In this example, there are no Data Store Memory blocks. The resulting code uses the same global variable for the full model.

Not Recommended: In this example, a Data Store Memory block is added into the Model block subsystem. The model subsystem uses a local version of the data store; the Atomic Subsystem uses a different version.
Array Length vs. Capacity
Array capacity is how many items an array can hold, and array length is how many items an array currently has.
ggorantala · 1 min read

How long is an array?

If someone asks you how long an array is, there could be two possible answers, because the question has two readings:

1. How many items can the array hold?
2. How many items does the array currently have?

The first point is about capacity, and the second is about length.

Let us create an array A[10] whose capacity is 10, but with no items added. Technically, we can say the length is 0.

int[] A = new int[10];

Let's insert the integers 1, 2, 3, and 4 into the above array.

A[0] = 1;
A[1] = 2;
A[2] = 3;
A[3] = 4;

At this point, the length/size of the array is 4, and the capacity of the array, the room it has to store elements, is 10. The following code snippets explain the difference between array length and capacity.

Capacity

The capacity of an array in Java can be checked by looking at the value of its length attribute. This is done using the code A.length, where A is the array's name.

public class ArrayCapacityLength {
    public static void main(String[] args) {
        int[] A = new int[10];
        System.out.println("Array Capacity is " + A.length); // 10
    }
}

Running the above snippet gives:

Array Capacity is 10

Length

This is the number of items currently in the A[] array.

import java.util.Arrays;

public class ArrayCapacityLength {
    public static void main(String[] args) {
        int[] A = new int[10];
        int currentItemsLength = 0;

        for (int i = 0; i < 4; i++) {
            currentItemsLength += 1;
            A[i] = i + 10;
        }

        System.out.println(Arrays.toString(A)); // [10, 11, 12, 13, 0, 0, 0, 0, 0, 0]
        System.out.println("Array length is " + currentItemsLength); // 4
        System.out.println("Array Capacity is " + A.length); // 10
    }
}

Running the above snippet gives:

Array length is 4
Array Capacity is 10

Tags: Arrays, Data Structures and Algorithms

Gopi has a decade of experience with a deep understanding of Java, Microservices, and React. He worked in India & Europe for startups, the EU government, and tech giants.
Using a loop block to gather all results from a REST API

Hi Team,

Have a possibly simple one, but my technical knowledge with JS isn't advanced enough to figure it out. I have a REST API that allows me to grab 100 records at a time; if there are still records after that, it returns a response code of 206, and if there are no further records, it returns 200. See below as per the API documentation:

In my head, it should be possible to make this API call and have a JSON object in the range value field that the loop goes through and updates based on the 206 or 200 response, then return the accumulated results. I'm not sure if this has to happen as: first API call -> loop block repeating the API call, or just in the one loop block using the code option instead (but then I'm unsure what to reference the loop block to). Would greatly appreciate any help with this idea. I have tried getting through it with ChatGPT, but my coding knowledge isn't quite enough to get it over the line. See below for an excerpt of the format of the returned API call.

Cheers,
Tom

Reply:

Hey @tomrezoando,

I think the way the API is structured requires you to make an initial API call to determine whether or not you need to make successive calls. The first query would go into a branch that checks for the status code and, when it == 206, can be sent along to a code-based loop which would be able to determine how many new queries need to be made to gather and return all of the results. Here's a little example of a theoretical type of workflow to use; I am uncertain of the actual response header locations, so I made some assumptions in the code block:

There's probably a more efficient way of making the looped queries, but this is more of a proof of concept of what I think you are looking for. In this example, you'd still have to join in the data from the results of the initial query (which would return either 100 items + the Content-Range header, or just give you all of the items available if there were fewer than 100, but that is something which can be explored via the Else/Else-If functions branch node as well).

Hope this helps. If you have other questions or I misunderstood your intent, let me know!
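The code block referenced in the reply did not survive the page capture. As a rough reconstruction in the same spirit (the query names, the header location, and the trigger parameters here are assumptions for illustration, not confirmed details of the original app), a code-based loop could look something like this:

```javascript
// Hypothetical sketch: fan out extra requests based on the first response.
// Assumes the API reports progress via a Content-Range style value, e.g. "0-99/534".
const pageSize = 100;
const contentRange = initialQuery.metadata.headers["content-range"]; // assumed header location
const total = parseInt(contentRange.split("/")[1], 10); // total record count

let allRecords = [...initialQuery.data];
const extraPages = Math.ceil((total - pageSize) / pageSize);

for (let page = 1; page <= extraPages; page++) {
  const offset = page * pageSize;
  // Re-trigger the REST query with an updated range for each page.
  const result = await restQuery.trigger({
    additionalScope: { rangeStart: offset, rangeEnd: offset + pageSize - 1 },
  });
  allRecords = allRecords.concat(result);
}

return allRecords;
```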
24/01/2013 2:45am
PHP | Working Example - GridFieldExtraColumns

class GridFieldExtraColumns implements GridField_ColumnProvider
{
    protected $field = '';

    public function __construct($field)
    {
        $this->field = $field;
    }

    // Add the extra field as the first column of the grid.
    public function augmentColumns($gridField, &$columns)
    {
        array_unshift($columns, $this->field);
    }

    // HTML attributes for each cell, including an inline onclick handler
    // that prompts for a value and posts it back to the controller.
    public function getColumnAttributes($gridField, $record, $columnName)
    {
        return array(
            'data-extrafield' => '1',
            'data-url' => '/admin/boardcheck',
            'data-field' => $this->field,
            'data-id' => $record->ID,
            'onclick' => "if(event.stopPropagation){event.stopPropagation();}event.cancelBubble=true;var val = prompt('Enter value');var t = jQuery(this);t.html(val);var model = t.parents('tr').data('class');jQuery.post(t.data('url') + '/' + model + '/extrafields/' + t.data('id'),{field:t.data('field'),table:t.data('table'),id:t.data('id'),value:val});return false"
        );
    }

    // Cell content comes from extra data stored against the record.
    public function getColumnContent($gridField, $record, $columnName)
    {
        $res = $gridField->getList()->getExtraData($this->field, $record->ID);
        if ($res) {
            return $res[$this->field];
        }
    }

    public function getColumnMetadata($gridField, $columnName)
    {
        return array('title' => $this->field);
    }

    public function getColumnsHandled($gridField)
    {
        $form = $gridField->getConfig()->getComponentByType('GridFieldDetailForm');
        if ($form) {
            $field = $this->field;
            // this does nothing :(
            $form->setItemEditFormCallback(function ($form, $components) use ($field) {
                $form->addFieldToTab('Root.Main', new TextField($field));
            });
        }
        return array($this->field);
    }
}
To facilitate your preparation for the NS0-004 NetApp Certified Technology Solutions Professional exam, it is highly recommended that you engage in a comprehensive study of the latest NetApp Technology Solutions (NCTS) NS0-004 Exam Questions provided by PassQuestion. By doing so, you will not only enhance your knowledge and skills but also significantly increase your chances of success in the NS0-004 exam. It is of utmost importance to develop a strong grasp of the various exam topics, as it forms the foundation for efficient and effective exam preparation. By thoroughly studying NetApp Technology Solutions (NCTS) NS0-004 Exam Questions, you will acquire the necessary knowledge and skills to confidently approach and successfully pass your exam.

NetApp Certified Technology Solutions Professional (NS0-004)

The NS0-004 exam is now available for employees, partners, and customers to validate their ability to identify NetApp solutions that address business challenges. The exam covers Infrastructure, Data Storage Software, and NetApp Hybrid Multicloud Solutions. Passing the exam earns candidates the NetApp Certified Technology Solutions Professional badge. Prior to taking the exam, it is recommended to have 3-6 months of experience with hybrid cloud technology basics and a general understanding of the NetApp product portfolio. Requirements include a fundamental understanding of hybrid-cloud concepts in the industry, as well as NetApp solutions, concepts, and portfolio. Candidates should be able to leverage technologies to build solutions, identify use cases, utilize documentation, recognize hybrid cloud and networking components, understand basic protocols, and identify security solutions for data protection.

Key Topics Covered

Infrastructure:
- Identify basic infrastructure concepts and components
- Identify data management concepts
- Identify virtualization concepts

Data Storage Software:
- Identify components and/or concepts of NetApp SANtricity
- Identify components and/or concepts of NetApp ONTAP
- Identify components and/or concepts of NetApp StorageGRID
- Identify data management tools

NetApp Hybrid Multicloud Solutions:
- Identify NetApp cloud management tools
- Identify NetApp data mobility solutions
- Identify NetApp hybrid cloud data protection solutions
- Identify NetApp storage solutions
- Identify NetApp Cloud Services
- Identify consumption models

Preparation Strategies To Pass The NetApp NS0-004 Exam

Preparing for the NS0-004 exam requires a well-rounded study plan that encompasses various strategies to ensure success. In order to excel in this exam, consider implementing the following key ideas:

1. Official Resources: Utilize NetApp's extensive collection of official documentation and study materials specifically designed to cover all the exam objectives comprehensively. These resources will provide you with the necessary foundation to understand and grasp the concepts and topics required for the exam.

2. Hands-On Practice: Enhance your theoretical knowledge by actively engaging in hands-on practice with NetApp technologies. This practical approach will allow you to apply your learnings in simulated environments, enabling you to gain valuable experience and develop a deeper understanding of the concepts covered in the exam.

3. Practice Exams: Gauge your level of preparedness and familiarize yourself with the exam format by taking advantage of practice exams.
These mock exams will simulate the actual exam conditions, allowing you to assess your knowledge, identify areas of improvement, and build confidence in your abilities.

View Online NetApp Certified Technology Solutions Professional NS0-004 Free Questions

1. You are using Cisco Unified Computing System (UCS) and Cisco Nexus switches. You need a storage component that validates the FlexPod reference architecture. Which NetApp product will accomplish this task?
A. NetApp Cloud Volumes ONTAP
B. NetApp AFF
C. NetApp StorageGRID
D. NetApp ONTAP Select
Answer: B

2. Which NetApp platform forms the basis of the StorageGRID appliance?
A. AFF
B. FAS
C. E-Series
D. SolidFire
Answer: C

3. An IoT company is building a mobile device. The development team wants to move data collected with the mobile devices to a file share hosted on AWS that would be used as the ingestion source for a machine learning pipeline. They have decided to host the file share using Cloud Volumes ONTAP. Which software would you add to the mobile device to facilitate this task?
A. StorageGRID
B. ONTAP Select
C. SANtricity
D. Element
Answer: B

4. Which workload would benefit from a virtualized environment with the ability to quickly clone entire virtual machines?
A. a virtual desktop environment with 5,000 users
B. the payroll system of a government body
C. online retail sales
D. a large data lake
Answer: A

5. What are two benefits of managing persistent storage volumes in containers using NetApp Trident? (Choose two.)
A. Trident dynamically fulfills storage requests.
B. Trident provides a unified interface to NetApp storage.
C. Trident provides automated deployment of Helm charts to Kubernetes.
D. Trident provides additional storage efficiencies.
Answer: A, B

6. You want to restore files to the same NetApp Cloud Volumes ONTAP instance after being backed up in object storage. In this scenario, which NetApp service enables this capability?
A. Cloud Backup
B. Cloud Compliance
C. Spot Ocean
D. SaaS Backup
Answer: A

7. Which IEEE standard supports VLAN tagging?
A. 802.3u
B. 802.1q
C. 802.3q
D. 802.3
Answer: B

8. Which three benefits does Astra provide? (Choose three.)
A. application migration
B. data insights
C. data protection
D. disaster recovery
E. application management
Answer: A, C, D

9. You are the storage administrator for a large number of NetApp systems for your organization. You want to see a list of all of the identified risks for the clusters that you manage. In this scenario, from which two NetApp sources would you locate this information? (Choose two.)
A. Active IQ website
B. Cloud Sync
C. Cloud Insights
D. Cloud Compliance
Answer: A, C

10. On a NetApp E-Series hybrid flash system, you want to optimize drive rebuild speeds after a failure. In this scenario, which disk configuration would you use to accomplish this task?
A. RAID Double Parity (RAID-DP)
B. Disk Pools
C. RAID Triple Erasure Coding (RAID-TEC)
D. Volume Group
Answer: B
Please note that we are working on a new feature to replace the snapshot feature, which is currently deprecated. The announcement will be made soon.

Snapshot is a feature which can save the current state of an integration step for future reference. This feature can be used to prevent the repetitive execution of the same integration state. It is a JSON object which can contain any information deemed useful in preserving the current state.

Please note: the snapshot has a hard limit of 5Kb in size to prevent its misuse as an intermediate database. If the snapshot exceeds the 5Kb limit, your integration task will not be scheduled and an error will be reported: "Oops. Failed to schedule execution of task 'Name of your task'." Please use an external database like MongoDB, Redis, MySQL, etc. instead, if this limitation is affecting your workflow.

What can the snapshot do for your integration?

The snapshot feature can help you to:

• Avoid data duplication, so that the same data is not synced several times, which could potentially cause data loss by overwriting records at some stage.
• Save on processing time and costs in integration by transferring only the newest records instead of the whole dataset again.

Here is a scenario which best explains one of the common usages of the snapshot feature. Let's consider a very common situation: two systems need to be synced, and every change in the first system should be reflected in the second one. Keeping both systems in instantaneous sync would be not only almost unreachable but also resource-intensive. This is the reason both systems should connect at set time intervals. System one, the source, will be polled for new information at some specific intervals (usually every 3 minutes for ordinary flows) in order to transfer it to system two.

The question is: how would our platform know when the last sync was, and what was transferred from system one to system two during that last sync? It is doubtful that somebody would be constantly making notes of the changes in both systems, but it is a thought in the right direction. On our platform, it is possible to record the last state of your integration flow in the snapshot and read it before requesting an update. Think of the snapshot as a piece of paper where only your last action is recorded, nothing more.

Snapshot usage example: Querying objects by timestamp

This is an algorithm to query objects from systems:

1. Query all objects since 1 January 1970. This is done to establish the first point of reference. Usually, this is the stage of the first global sync between the two systems.
2. Iterate over all objects. In some cases this stage can take a while due to the amount of data in the source system, so it is recommended to use paging if the originating API supports it.
3. Calculate the max update date based on the returned objects. You must not calculate the max date as the current time, as you could miss some objects. This stage is also dependent on the originating API's properties.
4. Emit the new max date as the snapshot after iterating through all objects.

This is the general algorithm, but how exactly could we achieve this? One distinct possibility is to first check for the snapshot's existence.
If it does not exist, initialize the first sync date to 1 January 1970, like this:

params.snapshot = snapshot;

if (!params.snapshot || typeof params.snapshot !== "string" || params.snapshot === '') {
    // Empty snapshot
    console.warn("Warning: expected string in snapshot but found: ", params.snapshot);
    params.snapshot = new Date("1970-01-01T00:00:00.000Z").toISOString();
}

var newSnapshot = new Date().toISOString();
...
...
emitSnapshot(newSnapshot);

Again, one important fact needs to be considered: the snapshot is overwritten every time, and it should not be used as a temporary database of values, since you would risk breaching the snapshot's upper limit (5Kb) in no time.

Next: How to create a Snapshot
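Connecting the snippet above with steps 2-4 of the algorithm, here is a hedged sketch of what the elided middle part could look like; getObjectsSince and emitData are placeholder names for illustration, not documented platform APIs. Note that, following step 3 above, the new snapshot is taken from the objects' own update dates rather than from the current clock:

```javascript
// Hypothetical continuation of the snippet above.
var newSnapshot = params.snapshot;

// Steps 1-2: fetch everything updated since the last snapshot and iterate.
getObjectsSince(params.snapshot).forEach(function (obj) {
  // Step 3: track the max update date from the returned objects themselves,
  // never from the current time, so that no object can be missed.
  if (obj.updatedAt > newSnapshot) {
    newSnapshot = obj.updatedAt;
  }
  emitData(obj); // hand each object to the next integration step
});

// Step 4: persist the new max date only after all objects are processed.
emitSnapshot(newSnapshot);
```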
SciPy

numpy.matrix.argmin

matrix.argmin(axis=None, out=None)

Return the indices of the minimum values along an axis.

Parameters: See numpy.argmin for complete descriptions.

See also: numpy.argmin

Notes

This is the same as ndarray.argmin, but returns a matrix object where ndarray.argmin would return an ndarray.

Examples

>>> x = -np.matrix(np.arange(12).reshape((3,4))); x
matrix([[  0,  -1,  -2,  -3],
        [ -4,  -5,  -6,  -7],
        [ -8,  -9, -10, -11]])
>>> x.argmin()
11
>>> x.argmin(0)
matrix([[2, 2, 2, 2]])
>>> x.argmin(1)
matrix([[3],
        [3],
        [3]])
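To make the note about return types concrete (this comparison snippet is ours, not part of the reference page), the same data gives differently wrapped results through ndarray and through the matrix subclass:

```python
import numpy as np

a = -np.arange(12).reshape((3, 4))  # plain ndarray, same values as the example

# ndarray.argmin returns a scalar (flat index) or a 1-D array...
print(a.argmin())   # 11
print(a.argmin(0))  # array([2, 2, 2, 2])

# ...while matrix.argmin wraps the same indices in a 2-D matrix object.
m = np.matrix(a)
print(m.argmin(0))  # matrix([[2, 2, 2, 2]])
```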
Increment ++ and Decrement -- Operator as Prefix and Postfix

In this article, you will learn about the increment operator ++ and the decrement operator -- in detail, with the help of examples.

In programming (Java, C, C++, JavaScript, etc.), the increment operator ++ increases the value of a variable by 1. Similarly, the decrement operator -- decreases the value of a variable by 1.

a = 5
++a; // a becomes 6
a++; // a becomes 7
--a; // a becomes 6
a--; // a becomes 5

Simple enough so far. However, there is an important difference when these two operators are used as a prefix and as a postfix.

++ and -- operator as prefix and postfix

• If you use the ++ operator as a prefix, like ++var, the value of var is incremented by 1; then the value is returned.
• If you use the ++ operator as a postfix, like var++, the original value of var is returned first; then var is incremented by 1.

The -- operator works in a similar way to the ++ operator, except that -- decreases the value by 1.

Let's see the use of ++ as prefix and postfix in C, C++, Java and JavaScript.

Example 1: C Programming

#include <stdio.h>

int main() {
    int var1 = 5, var2 = 5;

    // 5 is displayed
    // Then, var1 is increased to 6.
    printf("%d\n", var1++);

    // var2 is increased to 6
    // Then, it is displayed.
    printf("%d\n", ++var2);

    return 0;
}

Example 2: C++

#include <iostream>
using namespace std;

int main() {
    int var1 = 5, var2 = 5;

    // 5 is displayed
    // Then, var1 is increased to 6.
    cout << var1++ << endl;

    // var2 is increased to 6
    // Then, it is displayed.
    cout << ++var2 << endl;

    return 0;
}

Example 3: Java Programming

class Operator {
    public static void main(String[] args) {
        int var1 = 5, var2 = 5;

        // 5 is displayed
        // Then, var1 is increased to 6.
        System.out.println(var1++);

        // var2 is increased to 6
        // Then, var2 is displayed
        System.out.println(++var2);
    }
}

Example 4: JavaScript

let var1 = 5, var2 = 5;

// 5 is displayed
// Then, var1 is increased to 6
console.log(var1++)

// var2 is increased to 6
// Then, var2 is displayed
console.log(++var2)

The output of all these programs will be the same.

Output
5
6
Group Policy Background

Discussion in 'Windows Server Systems' started by 537789, Oct 18, 2006.

1. 537789 - OSNN One Post Wonder - Messages: 2

Hey, I am having some trouble with my school background. I have the backgrounds on the school server, so when someone logs onto a computer the background shows up. But for some reason there are times when the background won't fully load: it will try to load, but only a small fraction of the background is visible. If anyone can help, it will be much appreciated :)

2. madmatt - Bow Down to the King - Political User - Messages: 13,312 - Location: New York

Are your wallpapers in BMP or JPG format?

3. 537789 - OSNN One Post Wonder - Messages: 2

jpg

4. madmatt - Bow Down to the King - Political User - Messages: 13,312 - Location: New York

When using JPG, Windows caches a converted BMP under:

C:\Documents and Settings\username\Local Settings\Application Data\Microsoft

e.g. Wallpaper1.bmp

I recommend using BMP files alongside Group Policy to prevent the wallpaper being cached.
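If you decide to follow the BMP recommendation and have a folder of JPG wallpapers to convert, here is a rough batch-conversion sketch in Python with Pillow; the share path is an example, adjust it to your server:

from pathlib import Path
from PIL import Image

src = Path(r"\\school-server\wallpapers")   # example share, not the OP's actual path
for jpg in src.glob("*.jpg"):
    img = Image.open(jpg).convert("RGB")    # BMP has no alpha, so normalize to RGB
    img.save(jpg.with_suffix(".bmp"))       # Pillow picks the BMP format from the extension
    print("converted", jpg.name)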
Human-like catching?

#1
When the bot is catching a Pokémon, does it still move at X km/h while we are trying to catch it? Or does the bot slow down like a human would, by 80+%, while trying to catch a Pokémon?

Let's say that I run the bot at 20 km/h (biking). Will the bot then slow down to 2 km/h, or even stand still while trying to catch the Pokémon like a human would, or will the bot just continue to move at 20 km/h while throwing an excellent curve ball?

Is there any setting that changes this? Is there any setting that changes the speed when spinning Pokéstops?

#2
The bot stops while it catches and then continues once it is caught. Same with Pokéstops. Hope this helps!

MutedFFS - Discord Admin / Forum Moderator - Botting since October 2016 - Team Instinct
What is the percentage increase/decrease from 5521 to 3437?

Quickly work out the percentage increase or decrease from 5521 to 3437 in this step-by-step percentage calculator tutorial. (Spoiler alert: it's -37.75%!)

So you want to work out the percentage increase or decrease from 5521 to 3437? Fear not, intrepid math seeker! Today, we will guide you through the calculation so you can figure out how to work out the increase or decrease in any numbers as a percentage. Onwards!

In a rush and just need to know the answer? The percentage decrease from 5521 to 3437 is -37.75%.

Percentage increase/decrease from 5521 to 3437?

An increase or decrease percentage between two numbers can be very useful. Let's say you are a shop that sold 5521 t-shirts in January, and then sold 3437 t-shirts in February. What is the percentage increase or decrease there? Knowing the answer allows you to compare and track numbers to look for trends or reasons for the change.

Working out a percentage increase or decrease between two numbers is pretty simple. The resulting number (the second input) is 3437, and what we need to do first is subtract the old number, 5521, from it:

3437 - 5521 = -2084

Once we've done that, we need to divide the result, -2084, by the original number, 5521. We do this because we need to compare the difference between the new number and the original:

-2084 / 5521 = -0.37746785002717

We now have our answer in decimal format. How do we get this into percentage format? Multiply -0.37746785002717 by 100? Ding ding ding! We have a winner:

-0.37746785002717 x 100 = -37.75%

We're done! You just successfully calculated the percentage difference from 5521 to 3437. You can now go forth and use this method to work out and calculate the increase/decrease in percentage of any numbers. Head back to the percentage calculator to work out any more calculations you need to make, or be brave and give it a go by hand. Hopefully this article has shown you that it's easier than you might think!
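For readers who prefer code, the same three steps collapse into one line of Python (the function name is ours):

def percent_change(old, new):
    """Percentage increase (+) or decrease (-) from old to new."""
    return (new - old) / old * 100

print(round(percent_change(5521, 3437), 2))  # -37.75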
Viewer event scripts

Bring more interactivity to your dashboards by defining custom event handlers. For instance, you can make a REST call and show additional information when a user double-clicks a point on a scatter plot.

The event handler is defined as JavaScript code that gets invoked when a particular event occurs. The code is saved in the viewer settings, so it gets saved along with the projects or view layouts.

This is a beta feature. To turn it on, check Settings | Beta | Allow Event Scripts.

To define an event handler:

1. Open viewer settings by clicking on the "gear" icon
2. Under the "Events" pane, find the event of interest
3. Edit the script, using the args and viewer variables to get the event context. See the examples below.

The following picture demonstrates how a few custom events are automatically activated when the project is opened:

Examples

Scatter plot: On Rendered: rendering custom text

args.sender.canvas.getContext('2d').fillText('foo', 100, 100);

Scatter plot: On Point Clicked: show custom information when a point is clicked

grok.shell.o = ui.divText('Clicked on ' + args.args.rowId);

Grid: On Cell Clicked

grok.shell.info('Clicked on ' + args.data.cell.column.name + ' : ' + args.data.cell.rowIndex)

Bar chart: On Category Clicked: clicking on a category

grok.shell.info(args.args.matchConditionStr);
Permutations and Combinations

Arranging Objects

The number of ways of arranging n unlike objects in a line is n! (pronounced 'n factorial').

n! = n × (n – 1) × (n – 2) × … × 3 × 2 × 1

Example

How many different ways can the letters P, Q, R, S be arranged?

The answer is 4! = 24. This is because there are four spaces to be filled: _, _, _, _

The first space can be filled by any one of the four letters. The second space can be filled by any of the remaining 3 letters. The third space can be filled by any of the 2 remaining letters, and the final space must be filled by the one remaining letter. The total number of possible arrangements is therefore 4 × 3 × 2 × 1 = 4!

• The number of ways of arranging n objects, of which p of one type are alike, q of a second type are alike, r of a third type are alike, etc., is:

n! / (p! q! r! …)

Example

In how many ways can the letters in the word STATISTICS be arranged?

There are 3 S's, 2 I's and 3 T's in this word. Therefore, the number of ways of arranging the letters is:

10! / (3! 2! 3!) = 50 400

Rings and Roundabouts

• The number of ways of arranging n unlike objects in a ring when clockwise and anticlockwise arrangements are different is (n – 1)!

When clockwise and anti-clockwise arrangements are the same, the number of ways is ½ (n – 1)!

Example

Ten people go to a party. How many different ways can they be seated around a circular table?

Anti-clockwise and clockwise arrangements are the same. Therefore, the total number of ways is ½ (10 – 1)! = 181 440

Combinations

The number of ways of selecting r objects from n unlike objects is:

nCr = n! / (r! (n – r)!)

Example

There are 10 balls in a bag numbered from 1 to 10. Three balls are selected at random. How many different ways are there of selecting the three balls?

10C3 = 10! / (3! (10 – 3)!) = (10 × 9 × 8) / (3 × 2 × 1) = 120

Permutations

A permutation is an ordered arrangement.

• The number of ordered arrangements of r objects taken from n unlike objects is:

nPr = n! / (n – r)!

Example

In the Match of the Day's goal of the month competition, you had to pick the top 3 goals out of 10. Since the order is important, it is the permutation formula which we use.

10P3 = 10! / 7! = 720

There are therefore 720 different ways of picking the top three goals.

Probability

The above facts can be used to help solve problems in probability.

Example

In the National Lottery, 6 numbers are chosen from 49. You win if the 6 balls you pick match the six balls selected by the machine. What is the probability of winning the National Lottery?

The number of ways of choosing 6 numbers from 49 is 49C6 = 13 983 816.

Therefore the probability of winning the lottery is 1/13983816 = 0.000 000 071 5 (3sf), which is about a 1 in 14 million chance.
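All of the results above are easy to verify programmatically. A quick sketch in Python (math.comb and math.perm require Python 3.8+):

import math

print(math.factorial(4))       # 24 arrangements of P, Q, R, S
print(math.factorial(10) // (math.factorial(3) * math.factorial(2) * math.factorial(3)))
                               # 50400 arrangements of STATISTICS
print(math.factorial(9) // 2)  # 181440 seatings around the table
print(math.comb(10, 3))        # 120 ways to pick 3 balls
print(math.perm(10, 3))        # 720 ordered top-3 goals
print(1 / math.comb(49, 6))    # ~7.15e-08, the lottery odds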
Python Question: Adding a dict as a value to another dict is overwriting the previous value (asked by user3157132, 4 months ago)

I have a piece of Python code like the one below (I am sorry that I couldn't paste my actual code, because it's very big):

final_dict = {}
default_dict = some_data

for input_dict in list_of_dicts:  # I am getting list_of_dicts from a JSON file
    resultant_dict = merge_dicts(input_dict, default_dict)
    id = return_value_from_a_function(resultant_dict)
    final_dict[id] = resultant_dict  # id will be different in each loop

So final_dict is supposed to have ids as keys and resultant_dicts as values. My problem is that at the end of the for loop, all the values in final_dict are the same as the last value of resultant_dict. I think it is overwriting the previous values (maybe because it's a reference). How do I solve this issue?

EDIT 1: merge_dicts actually creates the union of two dicts. When I print resultant_dict, it prints a different dict each time, as expected. But when I assign it as a value to final_dict, it modifies all the previous values with the latest one.

EDIT 2: All the input data is a dict which I am getting from a JSON file. The final dict should look something like below:

final_dict = {
    id1: dict1,
    id2: dict2
}

But I am getting the structure below (it is overwriting all the values with the latest dict value):

final_dict = {
    id1: dict2,
    id2: dict2
}

EDIT 3: This is how merge_dicts works:

def merge_dicts(tmp1, tmp2):
    ''' merges tmp2 into tmp1 '''
    for key in tmp2:
        if key in tmp1:
            if isinstance(tmp1[key], dict) and isinstance(tmp2[key], dict):
                merge_dicts(tmp1[key], tmp2[key])
            else:
                tmp1[key] = tmp2[key]
        else:
            tmp1[key] = tmp2[key]
    return tmp1

Answer

Why don't you generate the id first and then straight away assign the merge_dicts value there?

for input_dict in list_of_dicts:  # list_of_dicts comes from a JSON file
    merged = merge_dicts(input_dict, default_dict)
    final_dict[return_value_from_a_function(merged)] = merged

EDIT: Since return_value_from_a_function makes use of resultant_dict, it seems that return_value_from_a_function modifies resultant_dict. Take a deep copy of the value before calling it:

from copy import deepcopy

for input_dict in list_of_dicts:  # list_of_dicts comes from a JSON file
    resultant_dict = merge_dicts(input_dict, default_dict)
    value_dict = deepcopy(resultant_dict)
    id = return_value_from_a_function(resultant_dict)
    final_dict[id] = value_dict
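For anyone hitting the same symptom, here is a minimal standalone repro of the aliasing problem and the deepcopy fix; the names are illustrative, not from the question:

from copy import deepcopy

template = {"settings": {"retries": 3}}

# Buggy version: every value in `final` aliases the same dict object.
final = {}
for i in range(2):
    record = template          # no copy, just another reference
    record["id"] = i
    final[f"key{i}"] = record
print(final)                   # both entries show id == 1

# Fixed version: each value is an independent deep copy.
final = {}
for i in range(2):
    record = deepcopy(template)
    record["id"] = i
    final[f"key{i}"] = record
print(final)                   # key0 -> id 0, key1 -> id 1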
Communication System

Time Limit: 10 Seconds    Memory Limit: 32768 KB

We have received an order from Pizoor Communications Inc. for a special communication system. The system consists of several devices. For each device, we are free to choose from several manufacturers. The same device from two manufacturers differs in its maximum bandwidth and price. By overall bandwidth (B) we mean the minimum of the bandwidths of the chosen devices in the communication system, and the total price (P) is the sum of the prices of all chosen devices. Our goal is to choose a manufacturer for each device so as to maximize B/P.

Input

The first line of the input file contains a single integer t (1 <= t <= 10), the number of test cases, followed by the input data for each test case. Each test case starts with a line containing a single integer n (1 <= n <= 100), the number of devices in the communication system, followed by n lines in the following format: the i-th line (1 <= i <= n) starts with mi (1 <= mi <= 100), the number of manufacturers for the i-th device, followed by mi pairs of positive integers on the same line, each indicating the bandwidth and the price of the device respectively, corresponding to a manufacturer.

Output

Your program should produce a single line for each test case containing a single number, which is the maximum possible B/P for the test case. Round the numbers in the output to 3 digits after the decimal point.

Sample Input

1
3
3 100 25 150 35 80 25
2 120 80 155 40
2 100 100 120 110

Sample Output

0.649

Source: Asia 2002, Tehran (Iran), Iran Domestic
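One common approach (our sketch, not an official solution): the optimal B must be one of the bandwidths that appears in the input, so enumerate every candidate B; for a fixed B, the cheapest valid system picks, for each device, the cheapest manufacturer offering bandwidth >= B (B is infeasible if some device has none), and the answer is the maximum of B/P over all feasible candidates. With n, mi <= 100 this brute force is fast. A Python sketch:

import sys

def solve(devices):
    # devices: one list of (bandwidth, price) pairs per device
    candidates = {b for d in devices for (b, _) in d}
    best = 0.0
    for B in candidates:
        total = 0
        feasible = True
        for d in devices:
            prices = [p for (b, p) in d if b >= B]
            if not prices:
                feasible = False
                break
            total += min(prices)
        if feasible:
            best = max(best, B / total)
    return best

def main():
    data = sys.stdin.read().split()
    it = iter(data)
    for _ in range(int(next(it))):
        n = int(next(it))
        devices = []
        for _ in range(n):
            m = int(next(it))
            devices.append([(int(next(it)), int(next(it))) for _ in range(m)])
        print(f"{solve(devices):.3f}")

if __name__ == "__main__":
    main()

On the sample, B = 120 gives prices 35 + 40 + 110 = 185 and 120/185 = 0.649, matching the expected output.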
// TestCopyArray3.java: Demonstrate copying arrays
public class TestCopyArray3 {
  /** Main method */
  public static void main(String[] args) {
    // Create an array and assign values
    int[] list1 = {0, 1, 2, 3, 4, 5};

    // Create a second array with default values (all zeros)
    int[] list2 = new int[list1.length];

    // Note: "list2 = list1;" would NOT copy the array; it would make both
    // variables reference the same array object. Instead, copy the
    // elements so that list2 is an independent array:
    System.arraycopy(list1, 0, list2, 0, list1.length);

    // Display list1 and list2
    System.out.println("Before modifying list1");
    printList("list1 is", list1);
    printList("list2 is", list2);

    // Modify list1; list2 is unaffected because it holds its own copy
    for (int i = 0; i < list1.length; i++)
      list1[i] = 0;

    // Display list1 and list2 after modifying list1
    System.out.println("\nAfter modifying list1");
    printList("list1 is", list1);
    printList("list2 is", list2);
  }

  /** The method for printing a list */
  public static void printList(String s, int[] list) {
    System.out.print(s + " ");
    for (int i = 0; i < list.length; i++)
      System.out.print(list[i] + " ");
    System.out.println();
  }
}
Work with Iceberg tables in Impala

Apache Iceberg is an open, high-performance format for large analytic tables. The ADH Impala service adopts this format, allowing you to work with Iceberg tables using SQL and to run analytics over them.

Iceberg catalogs

In the Iceberg architecture, a catalog is a named repository of tables and their metadata. Using Impala, you can manage Iceberg tables in the following Iceberg catalogs:

• HiveCatalog. The default catalog used by Impala. Uses Hive Metastore to store table metadata. This catalog is used in the examples throughout the article.
• HadoopCatalog. A path-based catalog that assumes there is a catalog location in HDFS where individual Iceberg tables are stored.
• HadoopTables. A location-based catalog that assumes there is a location in HDFS that contains several Iceberg tables.

You can also use custom catalogs for existing tables. However, automatic metadata updates do not work for tables in a custom catalog; you have to manually call REFRESH on the table when it changes outside Impala.

DDL commands

Create a table

To create an Iceberg table in the default HiveCatalog, append the STORED AS ICEBERG clause to the standard Impala CREATE TABLE statement.
Example: CREATE TABLE default.impala_ice_test ( txn_id int, acc_id int, txn_value double, txn_date date) STORED AS ICEBERG; Run the DESCRIBE FORMATTED command to retrieve information about the newly created table: DESCRIBE FORMATTED default.impala_ice_test; Sample output Query: describe formatted impala_ice_test +------------------------------+----------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | name | type | comment | +------------------------------+----------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | # col_name | data_type | comment | | | NULL | NULL | | txn_id | int | NULL | | acc_id | int | NULL | | txn_value | double | NULL | | txn_date | date | NULL | | | NULL | NULL | | # Detailed Table Information | NULL | NULL | | Database: | default | NULL | | OwnerType: | USER | NULL | | Owner: | admin | NULL | | CreateTime: | Thu Jul 04 10:50:39 UTC 2024 | NULL | | LastAccessTime: | Sun Jan 25 08:15:21 UTC 1970 | NULL | | Retention: | 2147483647 | NULL | | Location: | hdfs://adh/apps/hive/warehouse/impala_ice_test | NULL | | Erasure Coding Policy: | NONE | NULL | | Table Type: | EXTERNAL_TABLE | NULL | | Table Parameters: | NULL | NULL | | | EXTERNAL | TRUE | | | OBJCAPABILITIES | EXTREAD,EXTWRITE | | | accessType | 8 | | | current-schema | {\"type\":\"struct\",\"schema-id\":0,\"fields\":[{\"id\":1,\"name\":\"txn_id\",\"required\":false,\"type\":\"int\"},{\"id\":2,\"name\":\"acc_id\",\"required\":false,\"type\":\"int\"},{\"id\":3,\"name\":\"txn_value\",\"required\":false,\"type\":\"double\"},{\"id\":4,\"name\":\"txn_date\",\"required\":false,\"type\":\"date\"}]} | | | engine.hive.enabled | true | | | external.table.purge | TRUE | | | metadata_location | hdfs://adh/apps/hive/warehouse/impala_ice_test/metadata/00000-c3406504-fe59-4464-8b1f-01971fc3b949.metadata.json | | | numFiles | 1 | | | snapshot-count | 0 | | | storage_handler | org.apache.iceberg.mr.hive.HiveIcebergStorageHandler | | | table_type | ICEBERG | | | totalSize | 1914 | | | transient_lastDdlTime | 1720090239 | | | uuid | 5495460c-53fe-4fb7-9120-0d5dad7fd022 | | | write.format.default | parquet | | | NULL | NULL | | # Storage Information | NULL | NULL | | SerDe Library: | org.apache.iceberg.mr.hive.HiveIcebergSerDe | NULL | | InputFormat: | org.apache.iceberg.mr.hive.HiveIcebergInputFormat | NULL | | OutputFormat: | org.apache.iceberg.mr.hive.HiveIcebergOutputFormat | NULL | | Compressed: | No | NULL | | Sort Columns: | [] | NULL | | | NULL | NULL | | # Constraints | NULL | NULL | 
+------------------------------+----------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ The table_type parameter in the output indicates that the table has been created as an Iceberg table. By default, Impala creates Iceberg tables using the Parquet data format. To use ORC or AVRO, set the table property write.format.default as shown in the following example: CREATE TABLE default.impala_ice_test_orc ( txn_id int, acc_id int, txn_value double, txn_date date) STORED AS ICEBERG TBLPROPERTIES('write.format.default'='ORC'); TIP More Iceberg table properties can be found in the Impala documentation. CREATE TABLE AS SELECT You can also use the CREATE TABLE AS <tbl_name> (CTAS) syntax to create an Iceberg table based on an existing table. CREATE TABLE default.impala_ice_test_ctas STORED AS ICEBERG AS SELECT txn_value, txn_date FROM default.impala_ice_test; Sample DESCRIBE FORMATTED output DESCRIBE FORMATTED default.impala_ice_test_ctas; +------------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | name | type | comment | +------------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | # col_name | data_type | comment | | | NULL | NULL | | txn_value | double | NULL | | txn_date | date | NULL | | | NULL | NULL | | # Detailed Table Information | NULL | NULL | | Database: | default | NULL | | OwnerType: | USER | NULL | | Owner: | admin | NULL | | CreateTime: | Thu Jul 04 11:07:31 UTC 2024 | NULL | | LastAccessTime: | Sun Jan 25 08:32:12 UTC 1970 | NULL | | Retention: | 2147483647 | NULL | | Location: | hdfs://adh/apps/hive/warehouse/impala_ice_test_ctas | NULL | | Erasure Coding Policy: | NONE | NULL | | Table Type: | EXTERNAL_TABLE | NULL | | Table Parameters: | NULL | NULL | | | EXTERNAL | TRUE | | | OBJCAPABILITIES | EXTREAD,EXTWRITE | | | accessType | 8 | | | current-schema | {\"type\":\"struct\",\"schema-id\":0,\"fields\":[{\"id\":1,\"name\":\"txn_value\",\"required\":false,\"type\":\"double\"},{\"id\":2,\"name\":\"txn_date\",\"required\":false,\"type\":\"date\"}]} | | | engine.hive.enabled | true | | | external.table.purge | TRUE | | | metadata_location | hdfs://adh/apps/hive/warehouse/impala_ice_test_ctas/metadata/00000-c3871ac4-ca86-4c6d-be6c-116f3bf7cdf9.metadata.json | | | numFiles | 1 | | | snapshot-count | 0 | | | storage_handler | org.apache.iceberg.mr.hive.HiveIcebergStorageHandler | | | table_type | ICEBERG | | | totalSize | 1531 | | | transient_lastDdlTime | 1720091251 | | | uuid | 5b792d0b-4e67-4a2b-957c-86e41970c977 | | | write.format.default | parquet | | | NULL | NULL | | # Storage Information | NULL | NULL | | SerDe Library: | org.apache.iceberg.mr.hive.HiveIcebergSerDe | NULL | | InputFormat: | org.apache.iceberg.mr.hive.HiveIcebergInputFormat | NULL | | OutputFormat: | org.apache.iceberg.mr.hive.HiveIcebergOutputFormat | 
NULL | | Compressed: | No | NULL | | Sort Columns: | [] | NULL | | | NULL | NULL | | # Constraints | NULL | NULL | +------------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Delete a table You can delete a table using the DROP TABLE <tbl_name> command. Example: DROP TABLE default.impala_ice_test; If the table property external.table.purge is set to true, the command also deletes the data files from HDFS. By default, the external.table.purge property is set to true when a table is created using the Impala’s CREATE TABLE command. If the table is created using CREATE EXTERNAL TABLE (the table already exists and gets registered in a catalog), this property is set to false, so DROP TABLE doesn’t remove table data, but only removes the table from the catalog. Iceberg V2 tables The Iceberg V2 table format supports row-level updates for DELETE and UPDATE operations using the "merge-on-read" approach. Using this format, instead of rewriting existing data files, Iceberg creates delete files that store information about the deleted records. These files actually contain file paths and positions of the deleted rows (position deletes). To create an Iceberg table that uses the Iceberg V2 format, specify the format-version=2 table property when creating a table. Example: CREATE TABLE default.impala_ice_test_v2 ( txn_id int, acc_id int, txn_value double, txn_date date) STORED AS ICEBERG TBLPROPERTIES('format-version'='2'); You can also upgrade an existing Iceberg V1 table to the Iceberg V2 version using ALTER TABLE. An example is shown below. ALTER TABLE default.impala_ice_test SET TBLPROPERTIES('format-version'='2'); Schema evolution Iceberg supports schema evolution that allows reordering, deleting, or changing columns without affecting the readability of old (pre-evolution) data files. NOTE Impala does not support schema evolution for AVRO-formatted tables. For more information, see Iceberg documentation. Below are Iceberg table schema evolution commands supported by Impala. ALTER TABLE …​ RENAME TO   The following command renames an Iceberg table. ALTER TABLE default.impala_ice_test RENAME TO default.impala_ice_test_new; ALTER TABLE …​ ADD COLUMN   You can add a column to an existing Iceberg table as shown below. ALTER TABLE default.impala_ice_test ADD COLUMN `comment` string; Sample output: +----------------------------+ | summary | +----------------------------+ | Column(s) have been added. | +----------------------------+ ALTER TABLE …​ DROP COLUMN   You can delete a column in an existing Iceberg table as shown below. ALTER TABLE default.impala_ice_test DROP COLUMN `comment`; Sample output: +--------------------------+ | summary | +--------------------------+ | Column has been dropped. | +--------------------------+ ALTER TABLE …​ CHANGE COLUMN   You can change a column name and the column type, assuming the new type is compatible with the old one. ALTER TABLE default.impala_ice_test CHANGE txn_id txn_id BIGINT; The output: +--------------------------+ | summary | +--------------------------+ | Column has been altered. | +--------------------------+ Create a partitioned table With Impala, you can create partitioned Iceberg tables using traditional value-based partitioning. 
The syntax is straightforward and is shown below: CREATE TABLE default.impala_ice_test_partitioned ( txn_id int, txn_value double, txn_date date) PARTITIONED BY (acc_id int) STORED AS ICEBERG; Also, you can create a partitioned table using one or more partition transforms. An example is shown below. CREATE TABLE default.impala_ice_test_partitioned_transform ( txn_id int, acc_id int, txn_value double, txn_date date) PARTITIONED BY SPEC ( BUCKET(3, acc_id), (1) day(txn_date)) (2) STORED AS ICEBERG; 1 The first part of the transformation expression instructs Iceberg to distribute inserted data among three buckets. For each acc_id value to be inserted, Iceberg calculates a hash and gets mod 3 result (hash(acc_id) mod 3) that specifies the destination bucket. 2 Then, within each of the 3 buckets, Iceberg creates partitions by date. Impala creates the following HDFS directory structure for a table created by the above command. [admin@ka-adh-2 ~]$ hdfs dfs -ls -R /apps/hive/warehouse drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=0 drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=0/txn_date_day=2023-01-03 -rw-r--r-- 3 impala hadoop 1153 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=0/txn_date_day=2023-01-03/314da83d47eee296-145c34e600000000_1592985449_data.0.parq drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=1 drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=1/txn_date_day=2023-01-02 -rw-r--r-- 3 impala hadoop 1153 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=1/txn_date_day=2023-01-02/314da83d47eee296-145c34e600000000_584411471_data.0.parq drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=2 drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=2/txn_date_day=2023-01-01 -rw-r--r-- 3 impala hadoop 1153 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/data/acc_id_bucket=2/txn_date_day=2023-01-01/314da83d47eee296-145c34e600000000_1148089692_data.0.parq drwxrwxr-x - impala hadoop 0 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/metadata -rw-r--r-- 3 impala hadoop 2385 2024-07-05 14:00 /apps/hive/warehouse/impala_ice_test_partitioned_transform/metadata/00000-b04f03f7-f770-44a1-825b-8e84e4392f41.metadata.json -rw-r--r-- 3 impala hadoop 3564 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/metadata/00001-13512042-67f5-4327-933c-978d41be5df1.metadata.json -rw-r--r-- 3 impala hadoop 6539 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/metadata/24a8910e-e616-4d4b-ace5-46987f83fe9a-m0.avro -rw-r--r-- 3 impala hadoop 3804 2024-07-05 14:01 /apps/hive/warehouse/impala_ice_test_partitioned_transform/metadata/snap-4359281668985942129-1-24a8910e-e616-4d4b-ace5-46987f83fe9a.avro You can view partitions using the SHOW PARTITIONS command. 
A sample output is below.

+----------------------------------------------+----------------+-----------------+
| Partition                                    | Number Of Rows | Number Of Files |
+----------------------------------------------+----------------+-----------------+
| {"acc_id_bucket":"0","txn_date_day":"19360"} | 1              | 1               |
| {"acc_id_bucket":"0","txn_date_day":"19361"} | 1              | 1               |
| {"acc_id_bucket":"0","txn_date_day":"19723"} | 1              | 1               |
| {"acc_id_bucket":"0","txn_date_day":"19725"} | 1              | 1               |
| {"acc_id_bucket":"1","txn_date_day":"19359"} | 1              | 1               |
| {"acc_id_bucket":"1","txn_date_day":"19360"} | 1              | 1               |
| {"acc_id_bucket":"2","txn_date_day":"19358"} | 1              | 1               |
| {"acc_id_bucket":"2","txn_date_day":"19361"} | 1              | 1               |
| {"acc_id_bucket":"2","txn_date_day":"19392"} | 1              | 1               |
+----------------------------------------------+----------------+-----------------+

Iceberg supports partition evolution, which allows changing a table's partitioning spec without rewriting existing data files. You can change a table's partitioning by using the ALTER TABLE SET PARTITION SPEC statement, as shown in the example below.

ALTER TABLE default.impala_ice_test_partitioned SET PARTITION SPEC (DAY(txn_date));

The output:

+-------------------------+
| summary                 |
+-------------------------+
| Updated partition spec. |
+-------------------------+

DML commands

Insert data

With Impala, you can write data to an Iceberg table using the INSERT INTO and INSERT OVERWRITE commands. Example:

INSERT INTO default.impala_ice_test VALUES
    (1, 1001, 150.50, '2023-01-04'),
    (2, 1002, 200.50, '2023-01-03'),
    (3, 1003, 50.00, '2024-01-03');

Using INSERT OVERWRITE, you can replace data in a table with the result of a query. For example:

INSERT OVERWRITE TABLE default.impala_ice_test_tmp SELECT * FROM default.impala_ice_test;

NOTE INSERT OVERWRITE is not allowed for tables that use the BUCKET partition transform.

Delete data

You can delete data from Iceberg V2 tables using the DELETE command. Example:

DELETE FROM default.impala_ice_test WHERE txn_id = 1;

NOTE The DELETE operation is supported only for Iceberg V2 tables. Attempting to delete rows in a V1 table throws an exception.

Update data

With Impala, you can update Iceberg V2 tables as shown in the example below.

UPDATE default.impala_ice_test SET txn_value = txn_value + 100;

NOTE The UPDATE operation is supported only for Iceberg V2 tables. Attempting to update rows in a V1 table throws an exception.

You can also use the UPDATE FROM statement to update a target Iceberg table based on a source table (the source can be a non-Iceberg table). Example:

UPDATE default.impala_ice_test
SET default.impala_ice_test.acc_id = src.acc_id,
    default.impala_ice_test.txn_date = src.txn_date
FROM default.impala_ice_test, default.impala_ice_test_src src
WHERE default.impala_ice_test.txn_id = src.txn_internal_id;

If there are multiple matches in the WHERE clause, Impala throws an error. There are several limitations on using the UPDATE FROM command, such as Parquet-only format support and issues with updating partitioned columns. For more information, see the Impala documentation.

Time travel

Iceberg tables support the time travel feature, which allows you to query data from a specific table snapshot that was created at some point in the past and can be referenced by an ID or timestamp. You can run time travel queries using the FOR SYSTEM_TIME AS OF <timestamp> and FOR SYSTEM_VERSION AS OF <snapshot-id> clauses. Several usage examples are presented below.
The following example queries data from the closest snapshot that is older than the specified timestamp:

SELECT * FROM default.impala_ice_test FOR SYSTEM_TIME AS OF '2024-07-02 12:00:00';

The result:

+--------+--------+-----------+------------+
| txn_id | acc_id | txn_value | txn_date   |
+--------+--------+-----------+------------+
| 2      | 1002   | 63.5      | 2024-02-04 |
| 1      | 1001   | 110.0     | 2023-01-01 |
+--------+--------+-----------+------------+

The next query identifies the snapshot to work with by subtracting time units from the current time:

SELECT * FROM default.impala_ice_test FOR SYSTEM_TIME AS OF now() - interval 1 minute;

The output:

+--------+--------+-----------+------------+
| txn_id | acc_id | txn_value | txn_date   |
+--------+--------+-----------+------------+
| 2      | 1002   | 63.5      | 2024-02-04 |
+--------+--------+-----------+------------+

The following query retrieves data from a snapshot explicitly identified by ID:

SELECT * FROM default.impala_ice_test FOR SYSTEM_VERSION AS OF 7308000224696874146;

The result:

+--------+--------+-----------+------------+
| txn_id | acc_id | txn_value | txn_date   |
+--------+--------+-----------+------------+
| 1      | 1001   | 10.0      | 2023-01-01 |
+--------+--------+-----------+------------+

To get information about the snapshots available for a given table, use the DESCRIBE HISTORY command as shown in the following example:

DESCRIBE HISTORY default.impala_ice_test;

Sample output:

+-------------------------------+---------------------+---------------------+---------------------+
| creation_time                 | snapshot_id         | parent_id           | is_current_ancestor |
+-------------------------------+---------------------+---------------------+---------------------+
| 2024-04-07 21:41:56.959000000 | 7308000224696874146 | NULL                | TRUE                |
| 2024-04-07 21:42:19.537000000 | 3739750708916322181 | 7308000224696874146 | TRUE                |
| 2024-04-07 22:04:58.577000000 | 3634103131974849338 | 3739750708916322181 | TRUE                |
+-------------------------------+---------------------+---------------------+---------------------+

Roll back

Whenever an Iceberg table gets modified, Iceberg creates a snapshot for that table. Using snapshots, you can roll back the table state to some version in the past, labeled by the snapshot ID. When a rollback is done for a table, a new snapshot is created with the same snapshot ID, but with a new creation timestamp.

By using the ALTER TABLE <tbl_name> EXECUTE ROLLBACK statement, you can roll back a table to a previous snapshot as shown below. For example, assume there is an Iceberg table with the following contents:

+--------+--------+-----------+------------+
| txn_id | acc_id | txn_value | txn_date   |
+--------+--------+-----------+------------+
| 2      | 1002   | 63.5      | 2024-02-04 |
| 1      | 1001   | 110.0     | 2023-01-01 |
+--------+--------+-----------+------------+

The following command rolls back the table state to a specific snapshot using the snapshot ID:

ALTER TABLE default.impala_ice_test EXECUTE ROLLBACK(7308000224696874146);

After executing this command, running SELECT returns the result set that the table held at the moment the specified snapshot was created.
For example:

+--------+--------+-----------+------------+
| txn_id | acc_id | txn_value | txn_date   |
+--------+--------+-----------+------------+
| 1      | 1001   | 10.0      | 2023-01-01 |
+--------+--------+-----------+------------+

The following command performs a rollback to a snapshot whose creation timestamp is older than the specified timestamp:

ALTER TABLE default.impala_ice_test EXECUTE ROLLBACK('2024-07-05 12:00:00');

By default, Iceberg accumulates snapshots until they are deleted by a user command. You can expire unnecessary snapshots by using the ALTER TABLE … EXECUTE expire_snapshots(<timestamp>) statement. Examples:

ALTER TABLE default.impala_ice_test EXECUTE expire_snapshots('2024-07-05 12:00:00');

ALTER TABLE default.impala_ice_test EXECUTE expire_snapshots(now() - interval 10 days);

Iceberg data types mapping

The following table describes the mapping of Iceberg data types to SQL types in Impala.

+---------------+----------------------------------+
| Iceberg type  | SQL type in Impala               |
+---------------+----------------------------------+
| boolean       | BOOLEAN                          |
| int           | INTEGER                          |
| long          | BIGINT                           |
| float         | FLOAT                            |
| double        | DOUBLE                           |
| decimal(P, S) | DECIMAL(P, S)                    |
| date          | DATE                             |
| time          | Not supported                    |
| timestamp     | TIMESTAMP                        |
| timestamptz   | Only read support via TIMESTAMP  |
| string        | STRING                           |
| uuid          | Not supported                    |
| fixed(L)      | Not supported                    |
| binary        | Not supported                    |
| struct        | STRUCT (read only)               |
| list          | ARRAY (read only)                |
| map           | MAP (read only)                  |
+---------------+----------------------------------+
Questions tagged [redux-saga]

The tag has no usage guidance.

0 votes, 0 answers, 86 views
React Redux chain-of-actions pattern - should I create new synchronous middleware for synchronous actions?
I am creating a React web application with a Yii2 API and Redux-Saga, which calls the Yii2 API (asynchronously). I am stuck with one especially large and complex user action (initiated by the selection of good, ...

0 votes, 1 answer, 784 views
"Fetch the data if it doesn't exist" selector?
Let's say I have a React component like this: const MyComponent = ({data}) => (<div>{JSON.stringify(data)}</div>); const myReduxSelector = (state, id) => state.someData[id]; ...

1 vote, 2 answers, 1k views
When using Redux/Redux-Saga - should JWTs be set in the action creators/sagas?
Almost every blog post I've encountered around generic auth handling using JWTs in a React/Redux/Saga application does the same thing, which is to store the JWT in local storage, in the action/saga. ...

5 votes, 1 answer, 7k views
Redux LOADING/SUCCESS/ERROR pattern - when using redux-saga
On a typical web app, we commonly have to deal with the LOADING/SUCCESS/ERROR problem - which is that: When I make a backend request I want to display a loading cursor, perhaps disable some buttons. ...
Sending e-mail from a random IP

hape - Verified User - Joined Jul 22, 2010 - Messages: 80 - Location: Poland

[EXIM] Sending e-mails from a random IP

Description: sometimes we have a server with a few IP addresses and we don't want to send all e-mail from one IP. So we will configure Exim to send e-mails randomly from an IP range, with a different PTR and HELO for every IP.

Assumptions:
• exim
• more than one public IP address
• may not work with IPv4 and IPv6 together

For this article we will use this subnet and reverse DNS as examples. Remember to set a valid PTR (reverse DNS) and A record for each IP.

IPv4          RevDNS/HELO
127.10.0.1    ip1.example.net
127.10.0.2    ip2.example.net
127.10.0.3    ip3.example.net
127.10.0.4    ip4.example.net

How to:

1. Log in to the server on a console as root or as a user with sudo access.

2. Create a file mapping HELO names to IPs, /etc/exim.helo.conf, with:

Code:
127.10.0.1:ip1.example.net
127.10.0.2:ip2.example.net
127.10.0.3:ip3.example.net
127.10.0.4:ip4.example.net

3. Open /etc/exim.conf and find:

Code:
remote_smtp:
  driver = smtp

add below:

Code:
.include_if_exists /etc/exim.ip.conf

4. Create the file /etc/exim.ip.conf with:

Code:
interface = "${perl{randinet}}"
helo_data = ${lookup{$sending_ip_address}lsearch*{/etc/exim.helo.conf}{$value}{$primary_hostname}}

5. Edit the file /etc/exim.pl and add below the existing content:

Code:
# Random IP selection
sub randinet {
  @inet = ("127.10.0.1","127.10.0.2","127.10.0.3","127.10.0.4");
  return $inet[int rand($#inet+1)];
}

6. Create the file /usr/local/directadmin/data/templates/custom/dns_txt.conf with:

Code:
|DOMAIN|.="v=spf1 a mx ip4:127.10.0.1 ip4:127.10.0.2 ip4:127.10.0.3 ip4:127.10.0.4 -all"

You can also use ~all instead of -all; it depends on your needs. For more, use Google.

7. Restart Exim:

Code:
service exim restart

8. Check that you have no errors.

That's all. We use it in our company and it works for many IP addresses. Some configs aren't mine, but I think that they aren't copyrighted :)

FAQ

1. How to modify existing SPF records?

It's not possible to use the DirectAdmin task system to rewrite bind/named zones, so you can use this script to perform that operation:

Code:
#!/bin/bash
OLD="v=spf1 a mx ip4:YOUR_OLD_IP ~all"
NEW="v=spf1 a mx ip4:127.10.0.1 ip4:127.10.0.2 ip4:127.10.0.3 ip4:127.10.0.4 -all"
DPATH="/var/named/*.db"
BPATH="/root/backup_DNS"
TFILE="/tmp/out.tmp.$$"
[ ! -d $BPATH ] && mkdir -p $BPATH || :
for f in $DPATH
do
  if [ -f $f -a -r $f ]; then
    /bin/cp -f $f $BPATH
    sed "s/$OLD/$NEW/g" "$f" > $TFILE && mv $TFILE "$f"
  else
    echo "Error: Cannot read $f"
  fi
done

and restart named:

Code:
service named restart

All operations have been performed on CentOS 6. Remember to make backups and change the IPs from 127.10.0.x to your own. And sorry for my English.

ozgurerdogan - Verified User - Joined Apr 20, 2008 - Messages: 312

Thank you. A client sends many mails (not spam) to Hotmail, and Hotmail blocks IPs. Could this be a workaround?
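One extra sanity check before relying on the rotation: verify that each IP's PTR record and each hostname's A record really agree with the mapping in /etc/exim.helo.conf. A small sketch in Python (reading the same file as in step 2; hostnames are the example ones from above):

Code:
import socket

# Parse the ip:hostname pairs from the HELO map described in step 2.
with open("/etc/exim.helo.conf") as f:
    pairs = [line.strip().split(":") for line in f if ":" in line]

for ip, host in pairs:
    ptr = socket.gethostbyaddr(ip)[0]       # reverse (PTR) lookup
    a_record = socket.gethostbyname(host)   # forward (A) lookup
    ok = (ptr == host) and (a_record == ip)
    print(f"{ip} -> PTR {ptr}, {host} -> A {a_record}: {'OK' if ok else 'MISMATCH'}")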
openfermion.measurements.pair_within_simultaneously

Generates simultaneous pairings between four-element combinations.

A pairing of a list is a set of pairs of list elements. E.g. a pairing of

    labels = [1, 2, 3, 4, 5, 6, 7, 8]

could be

    [(1, 2), (3, 4), (5, 6), (7, 8)]

(Note that we insist each element only appears in a pairing once; the following is not a pairing: [(1, 1), (2, 2), (3, 4), (5, 6), (7, 8)].)

This function generates a set of pairings such that for every four elements (i, j, k, l) in 'labels', there exists one pairing containing both (i, j) and (k, l).

Args:
    labels (list): list of elements to be paired

Returns:
    pairings (tuple of pairs): the desired pairings
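A minimal usage sketch, assuming the return value is iterable as described above (the exact shape of the result may differ between OpenFermion versions):

from openfermion.measurements import pair_within_simultaneously

labels = [1, 2, 3, 4, 5, 6, 7, 8]
pairings = pair_within_simultaneously(labels)

# Every four labels (i, j, k, l) should co-occur as (i, j) and (k, l)
# in at least one of the generated pairings.
for pairing in pairings:
    print(pairing)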
The Art of Data Visualization: Transforming Complex Information into Clear Insights

Mack John

The ability to convey information effectively is paramount in today's data-driven world. Data visualization, often called the art of turning complex data into clear and meaningful insights, plays a central role in this endeavor. This article will take you through the fascinating world of data visualization, from understanding its significance to exploring the tools, techniques, and real-world applications that make it a crucial skill for decision-makers and data professionals.

Why Data Visualization Matters

Understanding the Significance of Data Visualization

In its raw form, data can be overwhelming and challenging to comprehend. Data visualization bridges this gap by transforming abstract numbers and statistics into visual representations that are easier to grasp and more compelling. In this section, we delve into why data visualization matters and how it aids decision-makers in gaining deeper insights from complex datasets.

The Science Behind Data Visualization

Exploring the Cognitive Principles of Effective Visualization

Data visualization is not just about creating pretty charts and graphs; it's a science rooted in cognitive psychology. We explore the principles that underpin effective data visualization, including how our brains process visual information and why certain types of visualizations work better than others.

Types of Data Visualizations

Charts and Graphs: Unveiling Patterns

How Different Chart Types Reveal Specific Data Patterns and Insights

Charts and graphs come in various forms, each tailored to unveil specific data patterns. From line graphs showing trends over time to pie charts representing proportions, we dissect the types of charts and diagrams commonly used in data visualization. You'll understand when and how to use each variety to convey your data's message effectively.

Maps and Geographic Data Visualization

Utilizing Maps to Convey Geographical Insights and Trends

Maps are powerful tools for displaying geographical data and trends. This section explores how geographic data visualization helps organizations uncover insights related to location, demographics, and spatial relationships. We'll showcase examples of map-based visualizations that have significantly impacted various industries.

Tools and Techniques

Choosing the Right Data Visualization Tool

A Guide to Selecting the Best Data Visualization Tool for Your Data

With many data visualization tools available, selecting the right one for your needs can be daunting. We provide a comprehensive guide to help you navigate the options and make informed choices based on your data, objectives, and skill level.

Data Visualization Best Practices

Tips and Techniques for Creating Compelling and Informative Visualizations

Creating effective data visualizations requires more than just dragging and dropping data onto a chart. We delve into best practices, offering practical tips and techniques to help you create compelling and informative visualizations that resonate with your audience.

Data Visualization in Action

Visualizing Big Data: Handling Vast Amounts of Information

Strategies for Effectively Presenting Extensive Datasets

As the volume of data grows, the ability to visualize big data becomes increasingly critical. We explore strategies and tools that allow data professionals to tackle massive datasets and extract meaningful insights. Discover how organizations are managing and visualizing vast amounts of information.
Real-World Examples of Data Visualization Success

Case Studies Showcasing the Benefits of Data Visualization

The power of data visualization is best demonstrated through real-world examples. We present case studies highlighting how organizations have leveraged data visualization to solve complex problems, drive decision-making, and achieve tangible results.

Interactivity and Engagement

Interactive Data Dashboards: Engaging Your Audience

Leveraging Interactivity to Enhance User Engagement and Understanding

Static visualizations are informative, but interactive data dashboards take engagement to the next level. Learn how to create dynamic dashboards that allow users to explore data on their own terms, fostering deeper understanding and engagement.

Data Visualization for Decision-Making

Data-Driven Decision-Making with Visualizations

How Data Visualization Aids in Making Informed Decisions

Informed decision-making is at the core of data visualization's purpose. We explore how data professionals and decision-makers use visualizations to gain insights, identify trends, and make informed choices across various industries.

SPLK-1002: Enhancing Data Visualization Proficiency

The Role of SPLK-1002 Certification in Advancing Your Data Visualization Skills

For those looking to elevate their data visualization proficiency, the SPLK-1002 certification offers a structured path to mastery. Discover how this certification program can enhance your skills and open doors to new opportunities in the dynamic field of data visualization.

As we embark on this exploration of data visualization, you'll gain a comprehensive understanding of its significance, principles, and practical applications. Whether you're a data scientist, analyst, or decision-maker, the knowledge and insights gleaned from this article will empower you to transform complex information into clear, actionable insights through data visualization.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <title>untitled - Paint</title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" type="text/css" media="screen" href="main.css" />
  <link href="https://fonts.googleapis.com/css?family=Lato" rel="stylesheet">
  <script defer src="https://use.fontawesome.com/releases/v5.6.3/js/all.js" integrity="sha384-EIHISlAOj4zgYieurP0SdoiBYfGJKkgWedPHH4jCzpCXLmzVsw1ouK59MuUtP4a1" crossorigin="anonymous"></script>
  <!-- Scripts and Links -->
</head>
<body>
  <div class="grid">
    <div class="upbar">
      <a class="upbar-opt" href="#">File</a>
      <a class="upbar-opt" href="#">Edit</a>
      <a class="upbar-opt" href="#">View</a>
      <a class="upbar-opt" href="#">Image</a>
      <a class="upbar-opt" href="#">Colors</a>
      <a class="upbar-opt" href="#">Help</a>
    </div>
    <div class="sidebar">
      <div id="item1" class="item"><img src="https://i.postimg.cc/N0XtfWcz/1.png" alt=""></div>
      <div id="item2" class="item"><img src="https://i.postimg.cc/k4FdWKDN/2.png" alt=""></div>
      <div id="item3" class="item"><img src="https://i.postimg.cc/prWMCrnw/3.png" alt=""></div>
      <div id="item4" class="item"><img src="https://i.postimg.cc/HWzgxwqd/4.png" alt=""></div>
      <div id="item5" class="item"><img src="https://i.postimg.cc/SRgFv46J/5.png" alt=""></div>
      <div id="item6" class="item"><img src="https://i.postimg.cc/sgkF8wh9/6.png" alt=""></div>
      <div id="item7" class="item active"><img src="https://i.postimg.cc/bNFf0zY5/7.png" alt=""></div>
      <div id="item8" class="item"><img src="https://i.postimg.cc/ncLtQsx7/8.png" alt=""></div>
      <div id="item9" class="item"><img src="https://i.postimg.cc/0yMRcLby/9.png" alt=""></div>
      <div id="item10" class="item"><img src="https://i.postimg.cc/LsjdMLkd/10.png" alt=""></div>
      <div id="item11" class="item"><img src="https://i.postimg.cc/Y975208v/11.png" alt=""></div>
      <div id="item12" class="item"><img src="https://i.postimg.cc/xCPWXQFt/12.png" alt=""></div>
      <div id="item13" class="item"><img src="https://i.postimg.cc/8cPxWY2D/13.png" alt=""></div>
      <div id="item14" class="item"><img src="https://i.postimg.cc/J06vQq8X/14.png" alt=""></div>
      <div id="item15" class="item"><img src="https://i.postimg.cc/ryQBc84p/15.png" alt=""></div>
      <div id="item16" class="item"><img src="https://i.postimg.cc/GhtZmZPx/16.png" alt=""></div>
    </div>
    <div class="canvas">
      <div class="canvas-white">
        <!--CANVAS-->
      </div>
    </div>
    <div class="bottombar">
      <div class="cont">
        <div class="selection">
          <div class="selected" style="background-color: black"></div>
          <div class="selected2" style="background-color: white"></div>
        </div>
        <div class="colors">
          <div class="wut colors--color1" style="background-color: rgb(0,0,0)"></div>
          <div class="wut colors--color2" style="background-color: rgb(125,125,125)"></div>
          <div class="wut colors--color3" style="background-color: rgb(120,0,0)"></div>
          <div class="wut colors--color4" style="background-color: rgb(120,120,0)"></div>
          <div class="wut colors--color5" style="background-color: rgb(0,120,0)"></div>
          <div class="wut colors--color6" style="background-color: rgb(0,0,120)"></div>
          <div class="wut colors--color7" style="background-color: rgb(120,0,120)"></div>
          <div class="wut colors--color8" style="background-color: rgb(120,120,60)"></div>
          <div class="wut colors--color9" style="background-color: rgb(0,120,60)"></div>
          <div class="wut colors--color10" style="background-color: rgb(0,60,120)"></div>
          <div class="wut colors--color11" style="background-color: rgb(0,60,60)"></div>
          <div class="wut colors--color12" style="background-color: rgb(60,120,120)"></div>
          <div class="wut colors--color13" style="background-color: rgb(60,60,120)"></div>
          <div class="wut colors--color14" style="background-color: rgb(120,60,30)"></div>
          <div class="wut colors--color15" style="background-color: rgb(255,255,255)"></div>
          <div class="wut colors--color16" style="background-color: rgb(190,190,190)"></div>
          <div class="wut colors--color17" style="background-color: rgb(255,0,0)"></div>
          <div class="wut colors--color18" style="background-color: rgb(255,255,0)"></div>
          <div class="wut colors--color19" style="background-color: rgb(0,255,0)"></div>
          <div class="wut colors--color20" style="background-color: rgb(0,0,255)"></div>
          <div class="wut colors--color21" style="background-color: rgb(255,0,255)"></div>
          <div class="wut colors--color22" style="background-color: rgb(255,255,120)"></div>
          <div class="wut colors--color23" style="background-color: rgb(0,255,120)"></div>
          <div class="wut colors--color24" style="background-color: rgb(0,120,255)"></div>
          <div class="wut colors--color25" style="background-color: rgb(0,120,120)"></div>
          <div class="wut colors--color26" style="background-color: rgb(120,255,255)"></div>
          <div class="wut colors--color27" style="background-color: rgb(120,120,255)"></div>
          <div class="wut colors--color28" style="background-color: rgb(255,120,60)"></div>
        </div>
      </div>
    </div>
    <div class="bottominfo">
      <div class="a">For Help, click Help Topics on the Help Menu.</div>
      <div class="b">
        <a id="name" href="https://twitter.com/Jorgert1205" target="_blank">by Jorge Reyes</a>
      </div>
    </div>
  </div>
  <!-- SCRIPT -->
  <script type="text/javascript" src="app.js"></script>
</body>
</html>
// Styles (SCSS)

*,
*::before,
*::after {
  padding: 0;
  margin: 0;
}

body {
  font-family: Arial, Helvetica, sans-serif;
  font-size: .75rem;
}

.grid {
  width: 100%;
  height: 100vh;
  display: grid;
  grid-template-columns: auto 1fr;
  grid-template-rows: min-content 1fr 40px min-content;
  background-color: green;
}

.upbar {
  grid-area: 1 / 1 / 1 / -1;
  background-color: rgb(192,192,192);
  padding: .2rem;

  &-opt:link,
  &-opt:visited {
    color: black;
    text-decoration: none;
    padding: 2px;

    &:hover {
      box-shadow: 1px 1px rgba(black, 1), inset -1px -1px rgba(grey, 1), inset 1px 1px rgba(white, 1);
    }

    &:active {
      box-shadow: inset 1px 1px rgba(black, 1), inset 2px 2px rgba(grey, 1), 1px 1px rgba(white, 1);
    }
  }
}

.sidebar {
  grid-column: 1 / 2;
  grid-row: 2 / -1;
  background-color: rgb(192,192,192);
  padding: 0 .2rem;
  display: grid;
  grid-template-columns: repeat(2, max-content);
  grid-gap: 1px;
  justify-content: center;
  align-content: start;
}

.item {
  display: flex;
  justify-content: center;
  align-items: center;
  cursor: pointer;
  -moz-user-select: none;
  background-color: rgb(192,192,192);
  box-shadow: 1px 1px rgba(black, 1), inset -1px -1px rgba(grey, 1), inset 1px 1px rgba(white, 1);
  width: 23px;
  height: 23px;
}

.active {
  box-shadow: inset 1px 1px rgba(black, 1), inset 2px 2px rgba(grey, 1), 1px 1px rgba(white, 1);
  background-image: repeating-linear-gradient(to left bottom, rgba(white, .8), rgba(white, .8) .25px, rgba(grey, .1) .25px, rgba(grey, .1) .5px);
}

.canvas {
  background-color: rgb(123,123,123);
  box-shadow: inset 1px 1px rgba(#343434, 1), inset -1px -1px rgba(grey, 1);
  padding: .3rem;

  &-white {
    background-color: white;
    width: 90%;
    height: 80%;
    display: flex;
    justify-content: center;
    align-items: center;
  }
}

.bottombar {
  grid-area: 3 / 1 / 3 / -1;
  background-color: rgb(192,192,192);
  padding: .5rem .4rem .5rem .2rem;
}

.bottominfo {
  grid-area: 4 / 1 / 5 / -1;
  background-color: rgb(192,192,192);
  padding: .2rem;
  border-top: 1px solid rgba(#343434, .5);
  box-shadow: inset 0px 1px rgb(192,192,192), inset 0px 2px rgba(#343434, .5);
  display: flex;
  justify-content: space-between;
}

.selection {
  width: 30px;
  height: 30px;
  background-image: repeating-linear-gradient(to left bottom, rgba(white, .8), rgba(white, .8) 1px, rgba(rgb(172, 172, 172), .02) 1px, rgba(grey, .02) 2px);
  box-shadow: inset 1px 1px rgba(grey, 1), inset 2px 2px rgba(black, 1), 1px 1px rgba(white, 1);
  margin-right: 2px;
  position: relative;

  .selected {
    background-color: black;
    position: absolute;
    width: 12px;
    height: 12px;
    top: 15%;
    left: 15%;
    border: 1px solid lightgray;
    box-shadow: 1px 1px rgba(#232323, 1), 1px 1px rgba(white, 1);
    z-index: 1;
    cursor: pointer;
  }

  .selected2 {
    background-color: white;
    position: absolute;
    width: 12px;
    height: 12px;
    bottom: 15%;
    right: 15%;
    border: 1px solid lightgray;
    box-shadow: 1px 1px rgba(#232323, 1), 1px 1px rgba(white, 1);
    cursor: pointer;
  }
}

.cont {
  display: flex;
  transform: translateY(-.3rem);
}

.wut {
  margin: 1px;
  box-shadow: inset 1px 1px rgba(grey, 1), inset 2px 2px rgba(black, 1), 1px 1px rgba(white, 1);
  cursor: pointer;
}

.colors {
  display: grid;
  grid-template-columns: repeat(14, 15px);
  grid-template-rows: 15px;

  &--color1  { background-color: black; }
  &--color2  { background-color: grey; }
  &--color3  { background-color: violet; }
  &--color4  { background-color: red; }
  &--color5  { background-color: violet; }
  &--color6  { background-color: pink; }
  &--color7  { background-color: orange; }
  &--color8  { background-color: olive; }
  &--color9  { background-color: lightgreen; }
  &--color10 { background-color: burlywood; }
  &--color11 { background-color: gold; }
  &--color12 { background-color: yellowgreen; }
  &--color13 { background-color: darkcyan; }
  &--color14 { background-color: darkgoldenrod; }
  &--color15 { background-color: white; }
  &--color16 { background-color: lightgrey; }
  &--color17 { background-color: saddlebrown; }
  &--color18 { background-color: indianred; }
  &--color19 { background-color: indigo; }
  &--color20 { background-color: palegreen; }
  &--color21 { background-color: peachpuff; }
  &--color22 { background-color: peru; }
  &--color23 { background-color: coral; }
  &--color24 { background-color: cornflowerblue; }
  &--color25 { background-color: salmon; }
  &--color26 { background-color: hotpink; }
  &--color27 { background-color: khaki; }
  &--color28 { background-color: green; }
}

#name:link,
#name:visited {
  color: black;
  text-decoration: none;
}

// app.js
// (Restructured: the original attached the same handler to item1..item16 and
// color1..color28 one by one; loops preserve the identical behavior.)

// Tool buttons: clicking an item makes it the single "active" tool.
var items = [];
for (var i = 1; i <= 16; i++) {
  items.push(document.getElementById('item' + i));
}

items.forEach(function (item) {
  item.addEventListener('click', function () {
    items.forEach(function (el) { el.classList.remove('active'); });
    this.classList.add('active');
  });
});

// COLORS
var selected = document.querySelector('.selected');
var selected2 = document.querySelector('.selected2');

// Clicking either swatch swaps the primary and secondary colors.
selected.addEventListener('click', function () {
  var aux = selected.style.backgroundColor;
  this.style.backgroundColor = selected2.style.backgroundColor;
  selected2.style.backgroundColor = aux;
});

selected2.addEventListener('click', function () {
  var aux = selected2.style.backgroundColor;
  this.style.backgroundColor = selected.style.backgroundColor;
  selected.style.backgroundColor = aux;
});

// Palette: clicking any of the 28 swatches sets the primary color.
for (var c = 1; c <= 28; c++) {
  document.querySelector('.colors--color' + c).addEventListener('click', function () {
    selected.style.backgroundColor = this.style.backgroundColor;
  });
}
Jacksonkr
• Member for 11 years, 9 months
• Last seen more than a month ago

Questions asked:
• rewrite rules and querystring — 7 votes, 5 answers, 13k views
• WP JSON list all permalinks — 4 votes, 2 answers, 2k views
• Plugin action rewrite rule - non_wp_rules — 2 votes, 0 answers, 355 views
• htaccess modify headers IF url ends with "news" — 1 vote, 1 answer, 1k views
• Grabbing specific content — 1 vote, 2 answers, 119 views
• getting a js file for one page — 1 vote, 1 answer, 172 views
• custom post type search by reference id — 1 vote, 3 answers, 122 views
• Why isn't this youtube shortcode working? — 1 vote, 4 answers, 5k views
• Get all custom fields with wp's get_posts() — 1 vote, 0 answers, 2k views
• WP Plugin permissions - create new files — 0 votes, 1 answer, 178 views
• How to call wp plugin REST functions without curl? — 0 votes, 2 answers, 3k views
• wp3 custom post types rss — 0 votes, 1 answer, 158 views
• https and wordpress breaks posts — 0 votes, 1 answer, 250 views
• WP JSON API meta_query not working — 0 votes, 1 answer, 931 views
• Custom Behavior when Adding New Custom Post Type in Dashboard — 0 votes, 1 answer, 133 views
• wpdb and acf via wp rest api — 0 votes, 2 answers, 1k views
• Get ACF fields in relationships of returned post — 0 votes, 1 answer, 742 views
• photo gallery implementation like tmz — 0 votes, 1 answer, 201 views
• sort post types by amount of views — 0 votes, 2 answers, 1k views
• Does wp track views for posts? — 0 votes, 1 answer, 2k views
• grab neighboring content in a query — 0 votes, 1 answer, 133 views
• wpquery via ajax — 0 votes, 1 answer, 485 views
HeavisideTheta

HeavisideTheta[x] represents the Heaviside theta function θ(x), equal to 0 for x < 0 and 1 for x > 0.
HeavisideTheta[x1, x2, …] represents the multidimensional Heaviside theta function, which is 1 only if all of the xi are positive.

Details

Examples

Basic Examples (4)
Evaluate numerically:
Plot in one dimension:
Plot in two dimensions:
Differentiate to obtain DiracDelta:

Scope (36)

Numerical Evaluation (4)
Evaluate numerically:
HeavisideTheta always returns an exact result:
Evaluate efficiently at high precision:
HeavisideTheta threads elementwise over lists:

Specific Values (4)
As a distribution, HeavisideTheta does not have a specific value at 0:
Value at infinity:
Evaluate for symbolic parameters:
Find a value of x for which HeavisideTheta[x]=1:

Visualization (4)
Plot the HeavisideTheta function:
Visualize shifted HeavisideTheta functions:
Visualize the composition of HeavisideTheta with a periodic function:
Plot HeavisideTheta in three dimensions:

Function Properties (9)
Function domain of HeavisideTheta: It is restricted to real inputs:
Function range of HeavisideTheta:
HeavisideTheta has a jump discontinuity at the point x = 0:
HeavisideTheta is not an analytic function: It has both singularities and discontinuities:
HeavisideTheta is not injective:
HeavisideTheta is not surjective:
HeavisideTheta is non-negative on its domain:
HeavisideTheta is neither convex nor concave:
TraditionalForm typesetting:

Differentiation (4)
Differentiate the univariate HeavisideTheta:
Differentiate the multivariate HeavisideTheta:
Differentiate a composition involving HeavisideTheta:
Generate HeavisideTheta from an integral: Verify the integral via differentiation:

Integration (6)
Indefinite integral:
Integrate over finite domains:
Integrate over infinite domains:
Integrate the multivariate HeavisideTheta:
Numerical integration:
Integrate expressions containing symbolic derivatives of HeavisideTheta:

Integral Transforms (5)
FourierTransform of HeavisideTheta:
FourierSeries:
Find the LaplaceTransform of HeavisideTheta:
The convolution of HeavisideTheta with itself:
The convolution of HeavisideTheta[x] with DiracDelta[x + 1/2] - DiracDelta[x - 1/2] is equal to HeavisidePi[x]:

Applications (7)
Solve the time-independent Schrödinger equation with piecewise analytic potential:
Use DSolve with a DiracDelta source term to find Green's function: Solve the inhomogeneous ODE through convolution with Green's function: Compare with the direct result from DSolve:
Model a uniform probability distribution: Calculate the probability distribution for the sum of two uniformly distributed variables: Plot the distributions for the sum:
Fundamental solution (Green's function) of the 1D wave equation: Solution for a given source term: Plot the solution:
Fundamental solution of the Klein-Gordon operator: Visualize the fundamental solution (it is nonvanishing only in the forward light cone):
A cusp-containing peakon solution of the Camassa-Holm equation: Check the solution: Plot the solution:
Differentiate and integrate a piecewise-defined function in a lossless manner: Differentiating and integrating recovers the original function: Using Piecewise does not recover the original function:

Properties & Relations (6)
The derivative of HeavisideTheta is a distribution: The derivative of UnitStep is a piecewise function:
Expand HeavisideTheta into HeavisideTheta with simpler arguments:
Simplify expressions containing HeavisideTheta:
Use in integrals:
Use in Fourier transforms:
Use in Laplace transforms:

Possible Issues (10)
HeavisideTheta stays unevaluated for vanishing argument:
PiecewiseExpand does not operate on HeavisideTheta because it is a distribution and not a piecewise-defined function:
The precision of the output does not track the precision of the input:
HeavisideTheta can stay unevaluated for numeric arguments:
Machine-precision numericalization of HeavisideTheta can give wrong results: Use arbitrary-precision arithmetic to obtain the correct result: A larger setting for $MaxExtraPrecision will not avoid the N::meprec message because the result is exact:
The functions UnitStep and HeavisideTheta are not mathematically equivalent:
Products of distributions with coincident singular support cannot be defined (no Colombeau algebra interpretation):
HeavisideTheta cannot be uniquely defined with complex arguments (no Sato hyperfunction interpretation):
Numerical routines can have problems with discontinuous functions:
Limit does not give HeavisideTheta as a limit of smooth functions:

Neat Examples (1)
Form repeated convolution integrals starting with a product:

Wolfram Research (2007), HeavisideTheta, Wolfram Language function, https://reference.wolfram.com/language/ref/HeavisideTheta.html.
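For quick reference, the core identities exercised in these examples can be written compactly in standard notation. This summary is added for readability and is not text from the Wolfram page:

\theta(x) =
\begin{cases}
  0, & x < 0 \\
  1, & x > 0
\end{cases}
\qquad
\frac{d}{dx}\,\theta(x) = \delta(x)
\qquad
(\theta * \theta)(x) = \int_{-\infty}^{\infty} \theta(t)\,\theta(x - t)\,dt = x\,\theta(x)

The first line is the definition, the second is the distributional derivative that connects HeavisideTheta to DiracDelta, and the third is the self-convolution (a ramp) mentioned under Integral Transforms.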
Deploy in parallel multiple Azure virtual machines – PowerShell workflow script

I have been working on various Azure projects where it was necessary to deploy multiple virtual machines in parallel. Below is an example of a PowerShell workflow script that deploys a group of virtual machines, defined in a CSV file, into ARM:

AzureVMBuild.ps1

# Input parameters
param (
    [Parameter(Mandatory=$false,Position=1)]
    [string]$csvFile = ".\AzureVMBuildInputs.csv",

    [Parameter(Mandatory=$false,Position=3)]
    [string]$subscriptionId = '8d549018-802b-4924-a1bc-e274d6666644',

    [Parameter(Mandatory=$false,Position=4)]
    [string]$tenantId = 'd9cd498f-ae5c-4ab5-9a69-fe74fc666655'
)

# Main PS workflow
# --------------------
workflow Provision-AzureVM {
    param (
        [string]$vmName,              # $vmName = 'TESTVM'
        [string]$location,            # $location = 'North Europe'
        [string]$initVmSize,          # $initVmSize = 'Standard_D3_v2'
        [string]$resourceGroupName,   # $resourceGroupName = 'RG-EUN-Workloads'
        [string]$vNetName,            # $vNetName = 'VN-EUN'
        [string]$vNetRGName,          # $vNetRGName = 'RG-EUN-Networking'
        [string]$vmSubnet,            # $vmSubnet = 'FrontEnd'
        [string]$storageAccountName,  # $storageAccountName = 'saeunstgrsapp'
        [string]$storageContainer,    # $storageContainer = 'vhds'
        [string]$disksConfig,         # $disksConfig = '1\C\OS\127\127\4096;2\D\DATA\120\120\4096;'
        [string]$computerName,        # $computerName = 'TESTVM'
        [string]$osVersion,           # $osVersion = 'Windows Server 2012 R2'
        [string]$adminUserName,       # $adminUserName = 'adm'
        [string]$adminPassword,       # $adminPassword = 'P@$$w0rd1234'
        [string]$asRGName,            # $asRGName = 'RG-EUN-Workloads'
        [string]$asName,              # $asName = 'ASName01'
        [string]$azuresubscriptionId, # $azuresubscriptionId = '8d549018-802b-4924-a1bc-e274d6666644'
        [string]$azuretenantId,       # $azuretenantId = 'd9cd498f-ae5c-4ab5-9a69-fe74fc666655'
        [PSCredential]$azureCreds     # $azureCreds
    )

    $vmName = $vmName.ToUpper()
    $computerName = $computerName.ToUpper()

    inlinescript {
        $ErrorActionPreference = 'Stop'

        # Login to Azure
        $Output = Login-AzureRmAccount -TenantId $Using:azuretenantId -SubscriptionId $Using:azuresubscriptionId -Credential $Using:azureCreds
        $Output = Select-AzureRmSubscription -TenantId $Using:azuretenantId -SubscriptionId $Using:azuresubscriptionId

        # Get storage configuration
        $storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $Using:resourceGroupName -StorageAccountName $Using:storageAccountName
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Get storage account configuration"

        # Get network configuration
        $vNet = Get-AzureRmVirtualNetwork -Name $Using:vNetName -ResourceGroupName $Using:vNetRGName
        $subnetConfig = Get-AzureRmVirtualNetworkSubnetConfig -Name $Using:vmSubnet -VirtualNetwork $vNet
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Get network subnet configuration"

        # Set OS disk name and location
        $osDiskName = $Using:vmName + "_disk_0"
        $osDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + $Using:storageContainer + "/" + $osDiskName + ".vhd"
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set OS Disk name and location"

        # Set network adapter name and configuration
        $nicname = "$($Using:vmName)_nic1"
        $nic1 = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $Using:resourceGroupName -Location $Using:location -SubnetId $subnetConfig.Id -Force
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set network adapter name and configuration"

        # Set OS version
        $Publisher = 'MicrosoftWindowsServer'
        $Offer = 'WindowsServer'
        $Version = 'latest'
        If ($Using:osVersion -eq "Windows Server 2012 R2") { $Sku = '2012-R2-Datacenter' }
        ElseIf ($Using:osVersion -eq "Windows Server 2012") { $Sku = '2012-Datacenter' }
        ElseIf ($Using:osVersion -eq "Windows Server 2008 R2") { $Sku = '2008-R2-SP1' }
        Else { Throw "Found error in setting OS version." }
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set operating system version"

        # Set user credentials
        $secpasswd = ConvertTo-SecureString $Using:adminPassword -AsPlainText -Force
        $adminCreds = New-Object System.Management.Automation.PSCredential ($Using:adminUserName, $secpasswd)
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set administrator credentials"

        # Set virtual machine configuration
        If ($Using:asName -ne "") {
            $availabilitySet = Get-AzureRmAvailabilitySet -ResourceGroupName $Using:asRGName -Name $Using:asName
            $vm = New-AzureRmVMConfig -VMName $Using:vmName -VMSize $Using:initVmSize -AvailabilitySetId $availabilitySet.Id
        }
        Else {
            $vm = New-AzureRmVMConfig -VMName $Using:vmName -VMSize $Using:initVmSize
        }
        $vm = Set-AzureRmVMOperatingSystem -Windows -ComputerName $Using:computerName -Credential $adminCreds -ProvisionVMAgent -EnableAutoUpdate -VM $vm -TimeZone "GMT Standard Time"
        $vm = Set-AzureRmVMSourceImage -PublisherName $Publisher -Offer $Offer -Skus $Sku -Version $Version -VM $vm
        $vm = Set-AzureRmVMOSDisk -Name $osDiskName -CreateOption fromImage -Caching ReadWrite -VhdUri $osDiskUri -VM $vm
        $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set virtual machine configuration"

        # Build VM in Azure
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Build VM in Azure"
        $Output = New-AzureRmVM -ResourceGroupName $Using:resourceGroupName -Location $Using:location -VM $vm -DisableBginfoExtension
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Internal IP: $((Get-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $Using:resourceGroupName).IpConfigurations[0].PrivateIpAddress)"

        # Set static IP address
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Set static IP Address"
        $nicConfig = Get-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $Using:resourceGroupName
        $nicConfig.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
        $Output = Set-AzureRmNetworkInterface -NetworkInterface $nicConfig

        # Adding disks to VM
        # --------------------
        write-output "$(Get-Date -Format "yyyy-MM-dd HH:mm:ss") - $Using:vmName - Add DATA disks to VM"
        $vm = Get-AzureRmVM -ResourceGroupName $Using:resourceGroupName -VMName $Using:vmName
        $Disks = @()
        # LUN, diskNameSuffix, diskLabelSuffix, diskSize, caching (None/ReadOnly/ReadWrite) -> $Disks += ,@(0, 'disk_1', 'disk_1', 72, 'ReadWrite')
        # NOTE: entries in $disksConfig are separated by ';' and fields by '/' (the script splits on '/').
        $disksCfgs = $($Using:DisksConfig).Split(";")
        foreach ($diskCfgs in $disksCfgs) {
            If ($diskCfgs -eq "") { Continue }   # skip the empty entry left by a trailing ';'
            $diskCfg = $diskCfgs.Split("/")
            $diskLabel = "disk_" + $($diskCfg[0] - 1).ToString()
            If ([int]$diskCfg[0] -ge 2) {
                $Disks += ,@($($diskCfg[0] - 1), $diskLabel, $diskLabel, $diskCfg[3], 'None')
            }
        }
        foreach ($Disk in $Disks) {
            $LUN = $Disk[0]
            $diskName = "$($Using:vmName)_$($Disk[1])"
            $diskLabel = "$($Using:vmName)_$($Disk[2])"
            $diskSize = $Disk[3]
            $Caching = $Disk[4]
            $DataDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $diskName.ToLower() + ".vhd"
            $output = Add-AzureRmVMDataDisk -VM $vm -Name $diskLabel -DiskSizeInGB $diskSize -VhdUri $DataDiskUri -Caching $Caching -Lun $LUN -CreateOption empty
        }
        $output = Update-AzureRmVM -VM $vm -ResourceGroupName $Using:resourceGroupName
    }
}

$ErrorActionPreference = 'Stop'

#region Main code
#--------------------------------------------
write-output "Reading CSV file..."
Start-Sleep -Seconds 3

# Read CSV file - input parameters
if (!(Test-Path $(split-path $csvFile))) {
    Throw "CSV file doesn't exist in specified location - $csvFile."
}
else {
    $CSVJobs = Import-CSV -Path $csvFile
    Write-Verbose "Successfully imported CSV File."
}

# Logging in to Azure
write-output "Logging to Azure..."
Start-Sleep -Seconds 2
$AzureCreds = Get-Credential

# Start workflow
$PSJobs = @()
ForEach ($CSVJob in $CSVJobs) {
    $PSJobs += Provision-AzureVM $($CSVJob.vmName).Trim() $($CSVJob.location).Trim() $($CSVJob.initVmSize).Trim() `
        $($CSVJob.resourceGroupName).Trim() $($CSVJob.vNetName).Trim() $($CSVJob.vNetRGName).Trim() $($CSVJob.vmSubnet).Trim() `
        $($CSVJob.storageAccountName).Trim() $($CSVJob.storageContainer).Trim() $($CSVJob.disksConfig).Trim() $($CSVJob.computerName).Trim() $($CSVJob.osVersion).Trim() $($CSVJob.adminUserName).Trim() `
        $($CSVJob.adminPassword).Trim() $($CSVJob.asRGName).Trim() $($CSVJob.asName).Trim() $subscriptionId.Trim() $tenantId.Trim() $AzureCreds `
        -JobName $($CSVJob.vmName) -AsJob
}

# Get output of all jobs
$PSJobs | Get-Job | Format-Table -AutoSize

# Instructions for the user
write-output "IMPORTANT:`n
1. To check the state of all jobs, enter: GET-JOB | FORMAT-TABLE -AUTOSIZE
2. To get the results of a specific job, enter: RECEIVE-JOB -ID <ID_OF_JOB_TO_RECEIVE>
3. To suspend a job, enter: SUSPEND-JOB -ID <ID_OF_JOB_TO_SUSPEND>
4. To resume a suspended job, enter: RESUME-JOB -ID <ID_OF_JOB_TO_RESUME>
5. After all jobs complete, enter: GET-JOB | REMOVE-JOB"

AzureVMBuildInputs.csv

vmName,location,initVmSize,resourceGroupName,vNetName,vNetRGName,vmSubnet,storageAccountName,storageContainer,disksConfig,computerName,osVersion,adminUserName,adminPassword,asRGName,asName
TESTVM,North Europe,Standard_D1_v2,RG-EUN-Applications,VN-EUN,RG-EUN-Networking,FrontEnd,saeunstlrsapp01,vhds,1/C/OS/40/40/4096;2/E/SQLData/466/466/4096;3/L/SQLLogs/233/233/4096;,TESTVM,Windows Server 2008 R2,adm,P@$$w0rd,,

5 thoughts on "Deploy in parallel multiple Azure virtual machines – PowerShell workflow script"

1. Hello Bartosz, this is great. It is exactly what I am looking for. Is there a way to get the downloadable code and a sample CSV file? Also, to use it, do I just have to import it into an Azure Automation runbook? Thank you

2. [email protected]
   Hi,
   PS C:\Windows\system32> C:\hasan\AzureVMBuild1.ps1
   Reading CSV file...
   Logging to Azure...
   cmdlet Get-Credential at command pipeline position 1
   Supply values for the following parameters:
   C:\hasan\AzureVMBuild1.ps1 : You cannot call a method on a null-valued expression.
   + CategoryInfo : InvalidOperation: (:) [AzureVMBuild1.ps1], RuntimeException
   + FullyQualifiedErrorId : InvokeMethodOnNull,AzureVMBuild1.ps1
   PS C:\Windows\system32> $vmName
   TESTVM
   Please help me in getting this fixed.

3. I tried running the above script to build 4 Azure VMs. The strange thing was that three of them were built, but I still get the state "Failed", as detailed below.
   Here is a copy of the CSV file:
   vmName,location,initVmSize,resourceGroupName,vNetName,vNetRGName,vmSubnet,storageAccountName,storageContainer,disksConfig,computerName,osVersion,adminUserName,adminPassword,asRGName,asName
   TESTVM1,North Europe,Standard_D1_v2,GS-RG-TEMP,GS-VNET1,GS-RG-VNET1,GS-VNET1-SUB3-DMZ,gssantemp,vhds,1\C\OS\127\127\4096,TESTVM1,Windows Server 2008 R2,Gurjit.Singh,P@$$w0rd1234,,
   TESTVM2,North Europe,Standard_D1_v2,GS-RG-TEMP,GS-VNET1,GS-RG-VNET1,GS-VNET1-SUB3-DMZ,gssantemp,vhds,1\C\OS\127\127\4096,TESTVM2,Windows Server 2008 R2,Gurjit.Singh,P@$$w0rd1234,,
   TESTVM3,North Europe,Standard_D1_v2,GS-RG-TEMP,GS-VNET1,GS-RG-VNET1,GS-VNET1-SUB3-DMZ,gssantemp,vhds,1\C\OS\127\127\4096,TESTVM3,Windows Server 2012 R2,Gurjit.Singh,P@$$w0rd1234,,
   TESTVM4,North Europe,Standard_D1_v2,GS-RG-PROD,GS-VNET1,GS-RG-VNET1,GS-VNET1-SUB1,gssanprod,vhds,1\C\OS\127\127\4096,2\D\DATA\10\10\4096,TESTVM4,Windows Server 2016,Gurjit.Singh,P@$$w0rd1234,,
   Also, how many VMs can I deploy with this script simultaneously? Can I re-run the same script again, or will it overwrite everything? I tried, but nothing happened; I assume it checks whether the VM exists and then nothing happens?
   Id Name    PSJobTypeName State  HasMoreData Location  Command
   -- ----    ------------- -----  ----------- --------  -------
   1  TESTVM1 PSWorkflowJob Failed False       localhost Provision-AzureVM
   3  TESTVM2 PSWorkflowJob Failed False       localhost Provision-AzureVM
   5  TESTVM3 PSWorkflowJob Failed False       localhost Provision-AzureVM
   7  TESTVM4 PSWorkflowJob Failed False       localhost Provision-AzureVM
ChrisPasa - 1 year ago · 98 · JSON

Question: How do I LINQ nested lists to JSON in Web Api 2?

I am creating a web api that returns Json. My output looks like this:

[
  {
    "NodeID": 1252,
    "CASNumber": "1333-86-4",
    "EINECSCode": "215-609-9",
    "EUIndex": "215-609-9",
    "Duty": "No",
    "Prohibited": "No",
    "Unwanted": "No",
    "IsReach": "No",
    "SubstanceName": "Carbon black",
    "GroupName": "Renault Complete",
    "Portion": 0.100000
  },
  {
    "NodeID": 1252,
    "CASNumber": "1333-86-4",
    "EINECSCode": "215-609-9",
    "EUIndex": "215-609-9",
    "Duty": "No",
    "Prohibited": "No",
    "Unwanted": "No",
    "IsReach": "No",
    "SubstanceName": "Carbon black",
    "GroupName": "Renault Orange",
    "Portion": 0.100000
  }
]

I am trying to create nested Json with an output like this:

{
  "NodeID": 1252,
  "CASNumber": "1333-86-4",
  "EINECSCode": "215-609-9",
  "EUIndex": "215-609-9",
  "Duty": "No",
  "Prohibited": "No",
  "Unwanted": "No",
  "IsReach": "No",
  "SubstanceName": "Carbon black",
  "GroupName": [
    { "name": "Renault Complete" },
    { "name": "Renault Orange" }
  ],
  "Portion": 0.100000
}

This is my class:

public class BasicSubstances
{
    public int NodeID { get; set; }
    public string CASNumber { get; set; }
    public string EINECSCode { get; set; }
    public string EUIndex { get; set; }
    public string Duty { get; set; }
    public string Prohibited { get; set; }
    public string Unwanted { get; set; }
    public string IsReach { get; set; }
    public string SubstanceName { get; set; }
    public string GroupName { get; set; }
    public decimal? Portion { get; set; }
}

And this is my controller:

public List<BasicSubstances> GetBasicSubstance(string partNumber, string version, int nodeID, int parentID)
{
    IMDSDataContext dc = new IMDSDataContext();
    List<BasicSubstances> results = new List<BasicSubstances>();

    foreach (spGetBasicSubstanceResult part in dc.spGetBasicSubstance(partNumber, version, nodeID, parentID))
    {
        results.Add(new BasicSubstances()
        {
            NodeID = part.NodeID,
            CASNumber = part.CASNumber,
            EINECSCode = part.EINECSCode,
            EUIndex = part.EINECSCode,
            Duty = part.Duty,
            Prohibited = part.Prohibited,
            Unwanted = part.Unwanted,
            IsReach = part.ISReach,
            SubstanceName = part.SynonymName,
            GroupName = part.GroupName,
            Portion = part.Portion
        });
    }

    return results;
}

My Json looks like the first output, and I need it to look like the second output. I'm totally lost; any help would be appreciated.

Answer (Source)

Well, you can try the following.

Your models:

public class BasicSubstanceViewModel
{
    public int NodeID { get; set; }
    public string CASNumber { get; set; }
    public string EINECSCode { get; set; }
    public string EUIndex { get; set; }
    public string Duty { get; set; }
    public string Prohibited { get; set; }
    public string Unwanted { get; set; }
    public string IsReach { get; set; }
    public string SubstanceName { get; set; }
    public List<GroupName> GroupName { get; set; }
    public decimal? Portion { get; set; }
}

public class GroupName
{
    public string Name { get; set; }
}

Your method:

public BasicSubstanceViewModel GetBasicSubstance(string partNumber, string version, int nodeID, int parentID)
{
    IMDSDataContext dc = new IMDSDataContext();
    var spResult = dc.spGetBasicSubstance(partNumber, version, nodeID, parentID).ToList();

    if (!spResult.Any())
    {
        return null;
    }

    var firstPart = spResult[0];
    var result = new BasicSubstanceViewModel
    {
        NodeID = firstPart.NodeID,
        CASNumber = firstPart.CASNumber,
        EINECSCode = firstPart.EINECSCode,
        EUIndex = firstPart.EINECSCode,
        Duty = firstPart.Duty,
        Prohibited = firstPart.Prohibited,
        Unwanted = firstPart.Unwanted,
        IsReach = firstPart.ISReach,
        SubstanceName = firstPart.SynonymName,
        GroupName = spResult.Select(p => new GroupName { Name = p.GroupName }).ToList(),
        Portion = firstPart.Portion
    };
    return result;
}

Hope it will help.
Imagine you're a treasure hunter, but instead of a jungle or a set of ruins, you have the vast expanse of the internet to navigate. Now imagine you have a tool that leads you to the gold and shows how other hunters are faring. That invaluable compass is a SERP analyzer.

Introduction to SERP Analyzers

Navigating the twists and turns of search engine optimization (SEO) can feel like sailing through uncharted waters. Given today's relentless competition, an edge can make all the difference. One such advantage comes from understanding not just your own strategies but also what others are doing, and this is where a SERP analyzer becomes indispensable.

What are SERP analyzers, and why are they important for SEO?

At its core, a SERP analyzer reviews search engine results pages (SERPs) so you can peer beneath the surface of basic rankings. By deploying this tool, you can see which websites outrank yours, and why. This level of insight offers more than vanity metrics: it surfaces actionable data that can sharpen your SEO tactics. Understanding nuances such as keyword emphasis and the snippets showcased by top-ranking pages allows for tailored content strategies, refining your website's relevance and authority in ways competitors might miss.

How do SERP analyzers help review search engine results pages?

The value lies in the tool's capacity to process extensive datasets swiftly, offering detailed insights that would otherwise take enormous human effort. Every factor influencing the SERP is scrutinized systematically: keyword density, backlink profiles, URL structure, page performance. With these tools, conducting a free SERP analysis becomes an efficient task that guides you in positioning web pages for optimal visibility.

Features of SERP Analyzers

With an array of robust features, these SERP checker tools serve as indispensable instruments for any SEO professional or ambitious digital marketer.

Detailed analysis of organic search results

Think of a typical searcher scrolling through Google's pages. What makes them click on one result over another? Understanding this behavior is crucial, and it is where detailed SERP analysis shines. By dissecting page after page of organic search results, SERP metrics reveal valuable information such as:

• Keyword frequency and distribution within top-ranking content.
• The prevalence of meta tags that resonate with end users.
• The structuring styles of URLs that climb to pole position.

This granular deconstruction helps you replicate what works best in your niche and tailor your content accordingly.

Access to local search data for various locations

Say you run a bakery in Boston. It wouldn't do much good to rank favorably for pastry lovers across the pond in London. That's why local search data is worth its weight in gold for businesses targeting specific geographies. SERP position tracker capabilities let you peek under the hood at:

• How certain keywords perform across different locales.
• Which types of businesses dominate local pack listings.
• Behavioral trends among local versus global audiences.

By analyzing divergent geographic datasets, you're better equipped to bake (or rank) locally while thinking globally.

Ranking difficulty evaluation for targeted keywords

Imagine trying to find a perfect spot on an overcrowded beach. It's tough! A similar challenge exists when trying to rank for highly competitive keywords. SERP analyzers assess ranking difficulty by measuring:

1. The domain authority and backlink profiles of competing sites.
2. Content quality scores among existing top performers.
3. Search intent alignment between query and prevailing results.

Armed with this knowledge, you can strategically choose battles where victory is attainable rather than face steep uphill climbs elsewhere.

SERP position history tracking for competitor analysis

Keeping tabs on rivals' past maneuvers can predict their future actions. By monitoring competitors' SERP position history, you glean insights into:

• Their periods of ranking volatility, which can indicate successful or failed SEO strategies.
• How seasonal trends affect their performance throughout the year.
• Historical benchmarks that allow you to set realistic targets for your campaigns.

Tracking these historical breadcrumbs brings you closer to emulating and then surpassing your competitors.

Integration with other SEO tools for comprehensive analysis

One tool alone might not hold all the answers, which is why integrating multiple platforms yields deeper insights. Think about combining SERP checker tools with analytics dashboards or CRM systems. Such a fusion facilitates:

• A broader interpretation that combines user-experience data from analytics with ranking data from SERP checkers.
• Enhanced keyword intelligence, pairing keyword planning tools with real-time rank tracking.
• Streamlined workflows where team members move seamlessly from one facet of analysis to another within one integrated ecosystem.

Like interlocking gears, integrated tools synchronize to drive efficiency in your SEO endeavors and ensure no stone remains unturned.

Using SERP Analyzers for Keyword Research

Embarking on keyword research can sometimes feel like navigating a dense forest. The path isn't always clear, and it is easy to get lost amidst the thickets. With a quality SERP analyzer as your compass, you can cut through the complexity and discover opportunities that propel your SEO efforts forward.

How to conduct keyword research using the tool

Conducting keyword research with a keyword SERP tool involves several crucial steps:

1. Define Your Goals: Before jumping in, clarify what you aim to achieve with your SEO strategy. Are you looking for more traffic, higher engagement, or improved sales conversions?
2. Input Seed Keywords: Enter seed keywords related to your niche or industry into the tool.
3. Analyze SERP Data: Examine the search engine results page (SERP) data provided, paying close attention to key metrics like search volume and keyword difficulty.
4. Expand Keyword List: Use the tool's features to generate longer lists of related keywords, synonyms, and long-tail phrases. Each could be a potential ace up your sleeve.
5. Assess Relevance and Intent: Determine whether these keywords align with user intent and are genuinely pertinent to your content or services.

Throughout this process, staying organized will save you countless headaches down the line. Categorize your findings systematically so you can easily sift through them during content planning; a minimal sketch of this filtering step follows.
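To make steps 3-5 above concrete, here is a minimal Python sketch of the shortlisting pass. It assumes a hypothetical CSV export from your SERP analyzer with keyword, search_volume, and difficulty columns; the column names, thresholds, and the file name serp_export.csv are placeholder assumptions, not any specific tool's format:

import csv

# Hypothetical export: keyword, search_volume, difficulty (0-100).
# Adjust column names and thresholds to match your own tool's export.
def shortlist_keywords(path, min_volume=500, max_difficulty=30):
    picks = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            volume = int(row["search_volume"])
            difficulty = float(row["difficulty"])
            if volume >= min_volume and difficulty <= max_difficulty:
                picks.append((row["keyword"], volume, difficulty))
    # Highest-volume, lowest-difficulty candidates first.
    picks.sort(key=lambda k: (-k[1], k[2]))
    return picks

if __name__ == "__main__":
    for kw, vol, diff in shortlist_keywords("serp_export.csv"):
        print(f"{kw}\t{vol}\t{diff}")

The point of the sort order is that, among terms that clear your thresholds, you review the biggest prizes first; the thresholds themselves encode the "attainability versus impact" trade-off discussed next.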
Finding low-competition keywords with high search volume

For many budding SEO adventurers, striking a balance between attainability and impact when choosing keywords is a top priority. Here's how to uncover low-competition keywords buoyed by sizable search volumes:

• Use filters within your free SERP tracker to sort by "Competition" (or a similar metric), then look at terms marked as "Low" difficulty.
• Check search volume data points and aim for words or phrases with respectable numbers. Too little traffic won't move the needle on your SEO gauges.
• Consider seasonality and trends. Some low-competition terms become highly valuable during specific times of the year or amidst emerging trends.

Pro tip: don't overlook the long tails! These lengthy variations tend to have lower competition and often signal higher purchase intent from searchers.

Uncovering keyword opportunities based on SERP analysis

Peering deeply into SERP analysis reveals what users are seeking and which queries remain underserved. This is prime ground for quick wins in rankings:

• Analyze Top Pages: Review pages that rank well for your targeted terms and identify patterns, such as recurring topics or formats, that indicate successful strategies.
• Investigate Featured Snippets: Look closely at query spaces where featured snippets appear.
• Explore Related Searches: Mine topics adjacent to mainline queries through the related searches some tools offer, to find untapped veins with rich SEO prospects.

Always keep context in mind. The ideal term must fit both user expectations and your brand narrative to connect needs with solutions.

Analyzing Competitors with SERP Analyzers

As you navigate the complex ocean of search engine optimization, one tool that should be a staple in your toolbox is a quality SERP analyzer. In particular, consider how Ahrefs' SERP features can be wielded to gain an edge over the competition.

Identifying competitor strengths and weaknesses in organic search results

Imagine stepping into a ring where every fighter's strengths and weaknesses are known to you ahead of time. That's the advantage an effective SERP analysis offers when sizing up your competitors:

• Examine their top-ranking pages: Note which of your competitors' pages rank highly for your desired keywords. Is it blog content, product pages, or educational resources? This shows where they meet users' needs better than you do.
• Analyze their on-page SEO efforts: How well have they optimized their titles and descriptions? Do they use enticing meta tags that increase click-through rates (CTR)? You may discover opportunities to write more compelling metadata.
• Assess content quality and structure: Higher rankings often correlate with high-quality content. Look at how thoroughly their content covers a topic compared to yours. Could their skyscraper approach be trumping your shorter posts?

Such evaluations pinpoint where competitors excel and guide strategic improvements in your own SEO campaign.

Evaluating competitor ranking positions and visibility

Scrutinizing a SERP ranking report through tools like Ahrefs affords a clear window into where each player stands in the market hierarchy for chosen keywords:

1. Identify ranking trends: Don't just note static positions; watch patterns over time. Are certain competitors consistently climbing while others fluctuate?
2. Cross-reference with keyword difficulty: Are those high rankings achieved in highly competitive terrain, or are they dominating niche segments? Understanding this balance gives crucial context to their success.
3. Break down by keyword groups: Clusters of related terms often reveal more about strategic focus than individual keywords do.

This evaluation puts both rankings and broader visibility strategies under the microscope, which is critical for charting clearer SEO horizons.

Monitoring competitor strategies through historical data

The ripples of past tactics can inform future maneuvers, so studying historical data is pivotal:

• Track changes over time: Using Ahrefs' archival snapshots, watch how today's front-runners emerged from yesterday's scrum. What shifts occurred in heading tags or page structures?
• Spot link-building campaigns: Backlinks sway domain authority tremendously. Spikes in backlink acquisition might signal intensified link-building efforts worth emulating, or expose gaps in those attempts ripe for exploitation.

Peering through this rearview mirror of comprehensive historical data helps you forecast coming changes and adapt nimbly to keep pace with, or overtake, competitors.

Optimizing Local SEO with SERP Analyzer

When approaching SEO, harnessing every tool at your disposal is essential for climbing the mountain of search rankings.

Leveraging Regional Search Data to Improve Local SEO Efforts

Your journey begins with understanding how local search results differ from broader searches. Here, "SERP location" isn't just a keyword; it's the gateway to tailoring content that resonates with an audience in a specific geographic area. Follow these steps to improve your local SEO:

• Begin by Mapping Out Your Territory: Use a SERP analyzer to see what locals are searching for and which queries trigger location-based results.
• Know Thy Neighbor: Scrutinize hyper-local keywords and phrases common in the area you're targeting; these can vary significantly from one locale to another.
• Cultivate Content That Speaks Locally: Guided by what your analyzer reveals, craft pages and posts that reference local lingo, landmarks, events, or issues that stir community interest.

By applying these tactics to your analyzer's SERP location data, you're not just optimizing for SEO; you're speaking directly to a community eager for relevance.

Analyzing Local Search Results for Targeted Locations

Next comes a thorough analysis of search results within targeted locales. What appear as mere listings on a page are nuggets of gold waiting to be panned:

1. Zoom in on Nuances: Each city has its subtleties; a phrase popular in one might be non-existent in another. Catch these nuances early by examining variations across locations.
2. Study Successful Contenders: Look at who already ranks well locally and understand why they appeal to hometown crowds, whether through localized keyword usage or community engagement strategies.
3. Keep an Eye on Seasonal Trends: Different regions have seasonal quirks affecting searches, such as tourism highs or festival periods, when specific terms spike.

Analyzing SERP locations lets you uncover the patterns that work best within each targeted region.

Finding Opportunities to Outrank Competitors in Local Searches

The final step is a strategy for pinpointing chinks in competitors' armor that you can exploit:

• Seek out Gaps: Find service areas or products not adequately covered by competitors but still sought after by locals.
• Smart Positioning: Is there room at local events or forums where you could shine brighter than rivals? Use your SERP analyzer's insights for strategic positioning.
• Link Up Locally: Cultivate relationships with nearby businesses for mutual backlink opportunities, enhancing each other's standing in local searches.

Outranking the competition isn't about going toe-to-toe on every front; instead, target the untapped SERP location niches competitors have overlooked or can't fully commit to. A small sketch of such a cross-location comparison follows.
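Here is a minimal, self-contained Python sketch of the cross-location comparison described above. The domain lists are made-up illustrations standing in for per-location top-10 exports from whatever SERP analyzer you use:

# Illustrative per-location SERP exports (top-ranking domains per city).
# Replace with real exports keyed by (keyword, location) from your tool.
serp_results = {
    "boston": ["bakeryA.com", "bakeryB.com", "yelp.com"],
    "cambridge": ["bakeryB.com", "bakeryC.com", "yelp.com"],
}

def local_overlap(results):
    """Report which domains rank everywhere vs. only in one locale."""
    locations = list(results)
    everywhere = set(results[locations[0]])
    for loc in locations[1:]:
        everywhere &= set(results[loc])
    print("Rank in every location:", sorted(everywhere))
    for loc in locations:
        unique = set(results[loc]) - everywhere
        print(f"Only competitive in {loc}:", sorted(unique))

local_overlap(serp_results)

Domains that rank everywhere are entrenched regional players; domains unique to one city often mark the gaps, the locales where no competitor has fully committed, and those are the niches worth targeting first.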
Monitoring these elements using SERP Analyzers allows us to pull back the curtain on opportunities for enhanced visibility by: 1. Identifying pages where adding schema markup could lead to rich snippets. 2. Analyzing competitor pages that already utilize rich snippets successfully. 3. Investigating potential reasons for markup not being displayed despite proper implementation. This monitoring equips you with actionable insights to guide adjustments tactically. Tips and Best Practices for Effective SERP Analysis Tips and Best Practices for Effective SERP Analysis Embarking on a SERP analysis journey can be akin to mining for gold in the vast landscape of search engine data. With the right strategies and tools, you can uncover insights that propel your website to new heights. How to Interpret and Utilize the Data Provided by SERP Analyzer Effectively Think of a SERP analyzer as a high-powered telescope looking into the universe of Google’s search pages. To interpret this cosmos of information effectively, consider these steps: • Understand the Metrics: Know what each metric stands for, whether it’s click-through rates (CTR), organic positions, or featured snippets. Recognizing how they affect your site gives you a clearer picture. • Use Historical Data: Look at past data trends, which often predict future patterns and show where you might capitalize. • Assess User Intent: Try stepping into your audience’s shoes. What are they seeking? Analyzing user intent behind queries helps tailor your content effectively. • Spot SERP Features: Identify if special features like local packs or knowledge panels dominate the results page. These clues tell you what kind of content Google favors for particular queries. Interpreting data provided by a SERP analyzer is not just about numbers. You’ll also need to craft stories that resonate with readers and algorithms. Make sure to translate these insights into actionable tasks, such as optimizing on-page elements or creating richer content strategies. Best Practices for Conducting Thorough Competitor Analysis Using the Tool When diving into competitor analysis, employing methodical approaches ensures comprehensive insights: 1. Identify Key Rivals: • Start by pinpointing who truly competes in your digital space; a simple “who ranks where” approach won’t cut it. • Dive deep into domain comparisons to create a broader image of their SEO efforts. 2. Track Ranking Shifts: • Keep an eye on fluctuations in competitor rankings over time. Stability can be just as telling as volatility. 3. Examine Backlink Profiles: • Evaluate not just quantity but quality, too. The caliber of backlinks can support authority more than sheer numbers. 4. Study Content Quality and Relevance: • Scrutinize your rivals’ top-performing pieces. What makes them tick? Are there gaps you could fill? 5. Monitor On-page Optimization: • Assess title tags, headers, and meta descriptions. Minor tweaks can sometimes lead to significant leaps on the results page. Effective competitor analysis involves shadowing adversaries and learning from them to turn observed tactics into innovative practices that surpass standard benchmarks. How to do SERP analysis is a layered question. It requires technical understanding and creative ingenuity when interpreting data or evaluating competitors. Use these guidelines to navigate through vast oceans of information. By harnessing best practices robustly, one can convert analytics into actions that brighten the beacon of their online presence amidst competitive tides. 
Increasing Search Engine Visibility with SERP Analyzer When you embark on the voyage to amplify your website’s visibility, using a SERP Analyzer can be akin to navigating the high seas with an expertly crafted map. This map – your SERP analysis – is brimming with clues about the online terrain dominated by the competition. Peering through the lens of an SERP Analyzer illuminates your path, allowing you to make data-informed decisions that steer your SEO efforts in the right direction. Applying the insights gained from SERP analysis to improve website visibility Imagine standing at a vantage point where you can see how your competitors scale their ranks on search engines and replicate their most successful tactics. A robust SERP analysis places this panoramic perspective at your fingertips. 1. Identify High-Performing Content: Dissect top-ranking pages not just for keywords but also for their content structure, length, visuals, and tone. 2. Understand User Intent: Analyze which content formats captivate your audience. 3. Monitoring Backlinks: Observing who links back to competitor sites can uncover opportunities for networking and link-building strategies. These actions empower you to fine-tune meta descriptions, headings, and page titles that align more closely with what users and search engines are seeking. Optimizing content strategies based on competitor analysis and keyword research As you deepen your understanding of SERP competition, it’s time to mold those findings into a stellar content strategy. The interplay between analyzing competitors’ tactics and diving into keyword research sets the stage for SEO success: • Craft Content That Counts: Write authoritative pieces that satisfy search queries better than existing resources. • Optimize For The Right Keywords: Integrate long-tail keywords naturally into your content. These gems often have lower competition yet attract highly targeted traffic. • Break New Ground: Uncover gaps in competitors’ content strategies and fill these with fresh takes or comprehensive guides on overlooked topics. Optimizing your strategy involves adapting and innovating based on what the data suggests to capture attention and clicks within search engine results pages (SERPs). The blend of analyzing SERP competition through a capable analyzer and skillfully adapting both onsite elements and off-page factors epitomizes tactical advancement in SEO. Following these steps isn’t just about keeping pace. It’s about setting new benchmarks for others in pursuit of digital excellence. Selecting the Right SERP Analyzer Tool With the digital landscape teeming with an array of SEO tools, picking the most effective one can be daunting. This holds especially true when you pursue a specialized instrument such as an SERP analyzer. The following subsections will guide you through pivotal factors and comparisons that should inform your selection process. Factors to consider when choosing an SERP Analyzer tool Imagine stepping into an orchard ripe with fruit; each variety promises its own flavor and nutritional value. Similarly, SERP analyzers each possess their unique features and benefits. Here’s what you’ll need to consider when finding the perfect pick from the orchard: • Accuracy: Central to any Google SERP checker is accuracy. Ensure that the tool consistently offers precise data on search engine positions. • Real-time Data: You want insights hot off the press, or in this case, freshly updated rankings directly from Google’s dynamic results pages. 
• Mobile Compatibility: Much of today’s internet traffic comes from mobile devices. As a result, a mobile SERP checker keeps you not only informed but also relevant.
• Ease of Use: It goes without saying that the user experience must be seamless. Complex functions are great, but not if they send you down a rabbit hole of perplexity.
• Keyword Diversity: Your archetypal online SERP checker should grant access to long-tail and short-tail keyword analysis for a well-rounded strategy.
• Competitor Analysis Features: Analyzing competitors grants perspective! Tools offering granular insight into competitors’ performance can be invaluable.

Consider these points carefully before committing to a purchase decision!

Comparison of popular SERP analyzer tools on the market

Wouldn’t it be splendid to taste-test every apple before deciding which bag to fill? In the world of SEO tools, we have resources providing comprehensive reviews and head-to-head matchups. Let me walk you through some well-known contenders:

Ahrefs:
• A powerhouse suite whose Site Explorer doubles as an advanced online SERP checker.
• Offers detailed organic search reports alongside lost and new keyword tracking.
• Some may find it more costly than others with similar offerings.

SEMrush:
• Often praised for its vast database and depth in competitor research capabilities.
• Incorporates social media tracking along with traditional SERP metrics.
• It can feel overwhelming for beginners due to its robust feature set.

Moz Pro:
• Renowned for user-friendly interfaces coupled with powerful Local Market Analytics, making local SERP review easier.
• Provides exemplary customer support, often lauded by users new to SEO instruments.

SEOwind:
• A SERP analyzer for content writing.
• In-depth analysis of top-performing pages in terms of keywords and content structure.

When considering how to check SERP rankings effectively using one of these tools, weigh their specialized capabilities.

Final thoughts and next steps for utilizing a SERP analyzer effectively

Stepping into the arena of SEO can sometimes feel like embarking on a hike through uncharted wilderness. Every path you take is a mix of educated guesses and strategic choices. But with a top-notch SERP analyzer, navigating this terrain becomes less daunting. Your journey towards SEO success is one in which each step is informed by insights gleaned from a comprehensive analysis of search engine results pages (SERPs).

Embarking on this trek requires more than just obtaining the right tool. You’ll also need to use that solution adeptly. Here are some actionable suggestions to ensure that your use of a SERP analyzer isn’t merely a one-time affair but part of an ongoing strategy:

• Keep Analyzing Regularly: The digital landscape is ever-changing. What works today might not be as effective tomorrow. Make SERP analysis a regular fixture in your SEO routine.
• Be Agile with Your Strategies: As you uncover trends through the SERP analyzer, adapt your strategies accordingly. This flexibility can make all the difference between being seen or sliding into obscurity.
• Involve Your Team: Share insights with content creators, marketers, and developers so everyone can work together to achieve better visibility and higher rankings.

Use what you’ve learned here to optimize existing content, inspire new material, and outmaneuver competitors at every twist and turn.
With patience and persistence, strategically deploying your SERP analyzer will illuminate patterns in user behavior, content consumption preferences, and market gaps that are ripe for your unique input. Ultimately, view this technology not as a mere tactical device but as part of a holistic approach to mastering search engines.

By embracing these final thoughts and gearing up for continual learning and optimization using your chosen SERP analyzer tool, you’re not just ticking boxes; you’re crafting an enduring story where each chapter spells greater success in connecting with your audience through search engines. Welcome to a tale of triumph in SEO!

Kate Kandefer

Entrepreneur passionate about scaling SaaS companies on a global B2B stage. My expertise in AI, SEO, and content marketing is my toolkit for driving tangible results. I'm a hands-on executor guided by results, deeply passionate about marketing, and skilled at aligning business objectives with people's needs and motivations. With a pragmatic mindset, my approach is all about clarity, efficiency, and open dialogue.
Context: I'm writing an essay on hydrodynamics that deals with numerical methods for solving a system of partial differential equations (the shallow water, or Saint-Venant, equations). These can be written as a system of two equations or, alternatively, in a single line using vector notation:
\begin{equation} \frac{\partial U}{\partial t}+\frac{\partial F}{\partial U}\frac{\partial U}{\partial x} = S(U) \end{equation}
where $F$ and $U$ are two vector functions with two components each. It's not important for the scope of this question to know what they represent physically; the important part is that both $F$ and $U$ are functions of $x$ and $t$ ($F=F(x,t)$, $U=U(x,t)$), but $F$ can be written as well as a function of the components of $U$ ($F=F(U)$). I want to write the system in an even more compact way using the Jacobian:
\begin{equation} \frac{\partial U}{\partial t}+J\frac{\partial U}{\partial x} = S(U) \end{equation}
where $J$ is the Jacobian matrix of the vector function $F$ not with respect to the independent variables $(x,t)$, but with respect to the components of $U$: i.e., $J$ is the matrix of all the partial derivatives of the components of $F$ with respect to $U$:
\begin{equation} J =\frac{\partial F}{\partial U} = \begin{pmatrix}\frac{\partial F_1}{\partial U_1} & \frac{\partial F_1}{\partial U_2} \\ \frac{\partial F_2}{\partial U_1} & \frac{\partial F_2}{\partial U_2}\end{pmatrix} \end{equation}
Which is the correct notation I should use to write it? $J(F)$? $J(F(U))$? $J_U(F)$? I'm really confused because the Jacobian matrix is usually calculated using $(x,y,z)$ as independent variables, but this time I want to signal that I'm differentiating with respect to another set of variables!

Comment: I like just $\frac{\partial F}{\partial U}$. It’s the most informative notation. – Deane, Feb 17, 2021 at 16:01

1 Answer

I usually write $J_F(U)$. Inside the brackets I always put the variables we are differentiating with respect to; I put the $F$ down below because it is the function being differentiated. Hope this helps!
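For completeness, a brief LaTeX sketch in the answer's notation (my addition, assuming $F=F(U(x,t))$ as stated in the question) of the chain-rule step that produces the quasi-linear form:

\begin{equation}
\frac{\partial F}{\partial x}
  = \frac{\partial F}{\partial U}\frac{\partial U}{\partial x}
  = J_F(U)\,\frac{\partial U}{\partial x},
\qquad\text{so}\qquad
\frac{\partial U}{\partial t} + J_F(U)\,\frac{\partial U}{\partial x} = S(U).
\end{equation}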
Hot answers tagged

12
Actually, if the RSA key generation is malicious, there are even more subtle ways that someone can leak the key. The cleverest way I've seen works like this (assuming that we're generating an RSA-1024 key; for RSA-2048, we just use a larger curve): The attacker generates an EC public/private key pair; using a 192-bit curve for RSA-1024 is good. He ...

9
Hash functions must be public, so if you want to use RSA as a hash function you should fix $K$. Now let $n$ be the RSA modulus and $H$ denote the RSA hash function. We have $$H(M)=H(M+n)$$ so this function is neither second-preimage resistant nor collision resistant. Also, this system is not first-preimage resistant (with known public and private key): let $M^k=h \pmod n$ ...

6
If performing as in a typical description of textbook RSA, Bob has: secretly drawn distinct primes $p=83$ and $q=103$; computed $n=p\cdot q=8549$; computed $\varphi(n)=(p-1)\cdot(q-1)=8364$, where $\varphi(n)$ is Euler's totient function; chosen $e=445$, as some arbitrary $e>1$ with $\gcd(e,\varphi(n))=1$; published his public key $(n,e)$, which in ...

5
The field of cryptography that you are looking for is called Kleptography. In kleptography, we are dealing with a setting where the device performing your cryptographic tasks is potentially malicious. Now this device tries to leak information to some attacker that allows this attacker to break the used cryptographic scheme. If I am not mistaken, that scheme ...

5
One way to approach this problem is to first look at the simpler problem of the cardinality of $x^e \bmod p$ where $p$ is prime, and $\gcd(p-1, e)$ might not be 1. In that case, we have two cases: $x \equiv 0 \pmod{p}$, which makes it obvious that 0 will always be a possible postimage, and $x \in \mathbb{Z}^*/p$; in this second case, that group is ...

3
That doesn't hide Bob's identity from eavesdroppers. (The OP mentioned in chat that the OP isn't trying to do that.) I can no longer spot any other problems with the key exchange part. The encryption/decryption of application-level data is vulnerable to arbitrary replays and reflection and dropping. The public MAC input should indicate direction and ...

3
With an alphabet of 27 characters, the maximum number of characters that can fit into one plaintext block is equal to the largest integer less than or equal to $\log(n) / \log(27)$: $$\bigg\lfloor\frac{\log(59768553302699443)}{\log(27)}\bigg\rfloor = 11$$ On the other hand, assuming the ciphertext is represented as a string of ...

2
Which cryptographic algorithm would they want to use? That will really depend on the situation. To select the algorithms one should ask oneself (at least) the following questions: Which standards do you trust? Which standard do you have to use? What computations can you afford? Symmetric, or public-key based? It again depends on the situation. ...

2
Is JavaScript RSA signing safe? …is safe or can people forge… In contrast to the accepted answer, I would not call it “safe” from a cryptographic point of view, and I would definitely not say that “if you take good care of securing your environment where you run the JS code you will be OK”, because the sad fact is: that's not enough to ensure ...

2
What you seem to be looking for is a scheme like the following: It consists of two algorithms, a key generation algorithm $K$ and a "key use" algorithm $U$. The key generation algorithm outputs a pair of keys $(k_0, k_1)$.
The "key use" algorithm takes as input a key and an element from some set $S$ (which may depend on $k_0$ and $k_1$), and outputs an ... 2 1a lHash: What is this and where does this come from? That's in PKCS#1 v2.1: RSAES-OAEP-ENCRYPT ((n, e), M, L): 2a. If the label L is not provided, let L be the empty string. Let lHash = Hash(L), an octet string of length hLen (see the note below). 1b) PS: A string of zeros, but how much zeros? Similarly for this question, lets ... 2 Theorem: Let $gcd(a,n)=1$ and $\phi(n)$ be Euler's totient function, then $a^{\phi(n)} \pmod n=1$. One of this famous methods for encrypting is RSA. In this method we use above theorem. Let $e,n$ are public and $\phi(n) , d$ are private such that $e\cdot d =1\pmod {\phi(n)}$$(e\cdot d=1-t\cdot \phi(n))$. For encryption we have: ... 2 With such a small block size there is no way to employ RSA padding modes such as PKCS#1 v1.5 padding or OAEP. You could however see the encryption as ECB mode encryption. In that case you could apply padding mechanisms that have been constructed for symmetric block ciphers. Those padding modes however have been defined for bytes rather than characters. ... 2 In real word RSA modules are so large that probability for finding $c_1$ which is not coprime with $n$ is approximately zero. Also if you founded such number then $p=gcd(c_1,n)\neq1$ so $p$ is a factor of $n$ and in this case attack is not necessary because $n$ is factored. $gcd(296,1073)=37\neq 1$ so $p=37,q=\frac{1073}{37}=29$ and $\phi(n)=1008$ Now ... 2 You need an authentic channel from Alice to Bob to get a secret channel from Bob to Alice. This assumption is missing in a), so anyone in control of the communication channel can play man in the middle on any protocol. As long you don't have a secret channel from Alice to Bob or an authentic channel from Bob to Alice, Alice will never (= for any protocol) ... 2 what is the range for exponent e? Actually, there is no required upper bound for $e$ (except that some implementations may reject ridiculously large values). The math behind RSA states that any $e$ that is relatively prime to both $p-1$ and $q-1$ will work, no matter how large it is. There might not appear to be a need for an $e > lcm(p-1, q-1)$ (as ... 2 RSA modules factoring are not hard in general case. In special cases we can factor numbers easily. One of these special cases is weak prime number, if at least one of two RSA modules primes is weak we can factor it easily. It is interesting that number of such $1024$ bit modules are at least $2^{750}$ and for $2048$ bit is $2^{1500}$. Your mentioned RSA ... 1 Well this depends on your key size. But, in general: A 1024-bit RSA key using OAEP padding can encrypt up to (1024/8) – 42 = 128 – 42 = 86 bytes. A 2048-bit key can encrypt up to (2048/8) – 42 = 256 – 42 = 214 bytes. And so on. It is highly recommended you use at least a 2048-bit key. If you need to encrypt more data just break it down into blocks of ... 1 Typically, what we do when we encrypt a large piece of text with RSA is: We select a random symmetric key (perhaps, an AES key) We encrypt that symmetric key with RSA We then use that symmetric key to encrypt the actual message. So, as far as the RSA algorithm is concerned, it doesn't matter how long the message is. This is called a hybrid cryptosystem. ... 
1
The problem is to reliably and efficiently find the message $m$ (with $0\le m<n$) given the RSA modulus $n$, distinct RSA public exponents $e_1$ and $e_2$ coprime to each other and to the unknown $\phi(n)$, and ciphertexts $c_1=m^{e_1}\bmod n$ and $c_2=m^{e_2}\bmod n$. WLOG, and per the corrected question, $y_1$ is negative when it is applied the extended ...

1
Preliminary on notation: in the question, it seems advisable to change $C=T_{k}M$ to $C=T_{k}(M)$, and change $M=T^{-1}_{k}C$ to $M={T_k}^{-1}(C)$; it is necessary to change $T_{k}T^{-1}_{k}=I$ to ${T_k}^{-1}\circ T_k=I$, meaning that the function obtained by applying $T_k$ and then its inverse ${T_k}^{-1}$ is the identity, with $\forall M, ...$

1
I think it covers both symmetric and asymmetric systems. Given a transformation $T_k$: if computing $T^{-1}_k$ is easy, then this is a symmetric cryptosystem; on the other hand, if this is difficult, then it is an asymmetric one. Note that in this case we are ''hard-coding'' the key $k$ inside the transformation. Thus we are just given the transformation ...

1
Perhaps a simplified example or two might help you intuitively understand the concept. Let's say you want to let your little brother and sister encrypt messages so that either of them can encrypt any message, but only you can decrypt them. To keep things simple, let's say that the messages are simply numbers from $1$ to $10$. Your siblings have been ...
A Deep Dive into Guava's Caching Mechanism

Author: S

Caching matters a great deal in modern programming: it improves application performance and takes pressure off the database, which makes it a real workhorse for performance optimization. This article looks at Google Guava's caching mechanism; if that sounds useful, read on.

Chapter 1: Introduction

Hi everyone, I'm Xiao Hei. Today let's talk about Google Guava's caching mechanism. Caching plays a huge role in modern programming: it boosts application performance and reduces database load, a genuine power tool for performance work. And Guava's caching support is not only powerful but also very flexible to use.

Before we dig in, a quick word on what a cache is. A cache is simply a way of storing data so that future requests for the same data can be served quickly. Imagine if every request for the same data had to go back to the database: efficiency would suffer badly. That is where the cache steps in, saving us time and resources.

Chapter 2: Overview of Guava's Caching Mechanism

Now let's get to the essence of Guava's cache. Guava's caching mechanism is built on one idea: simple, fast, flexible. It is not meant to replace other caching solutions such as Redis or Memcached; rather, it provides a lightweight in-process cache, particularly suited to scenarios with modest cache-consistency requirements where you still want to cut down on trips to external storage.

How does a Guava cache differ from traditional Java caching? First, it is smarter. For example, a Guava cache can load new values automatically and refresh entries on demand, sparing you the hassle of manual cache management. Guava also offers flexible configuration options, such as expiration policies and maximum-size limits, so you can tune cache behavior to your actual needs.

So how does this cache work? Essentially, Guava's caching revolves around a few core components: CacheBuilder, LoadingCache, Cache, and CacheLoader. CacheBuilder is the starting point for building a cache, offering a chain of methods for configuring it. LoadingCache and Cache are the two cache implementations: the former loads entries automatically, the latter requires manual loading. CacheLoader is where the data-loading logic is defined.

Here is a simple code example showing how to create and use a Guava cache:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class GuavaCacheExample {
    public static void main(String[] args) {
        // create the cache
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .maximumSize(100) // maximum number of cache entries
            .build(
                new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return fetchDataFromDatabase(key); // simulate loading data from the database
                    }
                }
            );

        // use the cache
        try {
            String value = cache.get("key1"); // read from the cache, loading automatically on a miss
            System.out.println("Value for key1: " + value);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static String fetchDataFromDatabase(String key) {
        // simulate a database operation
        return "Data for " + key;
    }
}

Chapter 3: Guava's Core Cache Components

Guava provides several very practical components: CacheBuilder, LoadingCache, Cache, and CacheLoader. Working together, they make cache management both flexible and efficient.

3.1 CacheBuilder

First, let's look at CacheBuilder. This class is a marvel, a universal tool for building all kinds of customized caches. Want to cap the cache size? No problem. Want to set expiration times? Equally easy. Like Lego bricks, it lets you assemble whatever cache structure you need. The following example shows how to create a simple cache with CacheBuilder:

LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(1000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .removalListener(MY_LISTENER)
    .build(
        new CacheLoader<Key, Graph>() {
            public Graph load(Key key) throws AnyException {
                return createExpensiveGraph(key);
            }
        });

In this snippet, Xiao Hei creates a cache with a maximum capacity of 1000 entries, a 10-minute expire-after-write policy, and a removal listener.

3.2 LoadingCache and Cache

Next come LoadingCache and Cache, two genuinely delightful interfaces. LoadingCache loads entries automatically: when you try to fetch an entry that does not exist, Guava calls the loading function you defined to obtain the data. Cache is more flexible, letting you control exactly when data is loaded.

String graph = graphs.getUnchecked(key);

This line fetches data from a LoadingCache. If there is no value for key, Guava automatically invokes the CacheLoader to load one.

3.3 CacheLoader

Last, but just as important, is CacheLoader. This abstract class defines the data-loading logic. You only need to implement the load method; whenever the cache has no value for a key, Guava calls it to load fresh data.

public class MyCacheLoader extends CacheLoader<Key, Graph> {
    @Override
    public Graph load(Key key) {
        return fetchData(key); // your data-loading logic
    }
}

In this example, Xiao Hei creates a CacheLoader subclass implementing the loading logic.

By combining these components, we can flexibly build all kinds of powerful caching solutions to meet different business needs. Guava's caching mechanism is not just powerful, it is remarkably flexible and efficient!
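Since Section 3.2 only shows the LoadingCache side, here is a minimal sketch of driving a plain Cache by hand (this example is mine, not the article's; get(key, Callable), getIfPresent, and put are the standard Guava Cache methods):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

public class ManualCacheSketch {
    public static void main(String[] args) throws ExecutionException {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)
                .build(); // no CacheLoader: loading is up to the caller

        // get-if-absent-compute: the Callable runs only on a cache miss
        String value = cache.get("key1", () -> expensiveLookup("key1"));
        System.out.println(value);

        // getIfPresent never loads; it returns null on a miss
        System.out.println(cache.getIfPresent("key2")); // null

        cache.put("key2", "manually inserted"); // explicit insertion
        System.out.println(cache.getIfPresent("key2"));
    }

    private static String expensiveLookup(String key) {
        return "Data for " + key; // stand-in for a slow computation
    }
}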
Chapter 4: Guava Caching in Practice

Guava's cache is not merely impressive in theory; it truly shines in practice. Let's see how to turn that theory into working code that solves a real problem.

First, picture this scenario: we need to fetch user information from a database, but frequent queries are causing performance problems. This is where Guava's cache earns its keep. Below is an example of using a Guava cache to optimize database lookups:

// create a simple cache for storing user records
LoadingCache<String, User> userCache = CacheBuilder.newBuilder()
    .maximumSize(100) // maximum number of cache entries
    .expireAfterWrite(10, TimeUnit.MINUTES) // expire entries 10 minutes after they are written
    .build(new CacheLoader<String, User>() {
        @Override
        public User load(String userId) throws Exception {
            return getUserFromDatabase(userId); // the database query logic
        }
    });

// fetch user information through the cache
public User getUser(String userId) {
    try {
        return userCache.get(userId); // try the cache first; load automatically on a miss
    } catch (ExecutionException e) {
        throw new RuntimeException("Error fetching user from cache", e);
    }
}

private User getUserFromDatabase(String userId) {
    // the logic for fetching the user from the database goes here
    // assume this is an expensive operation
    return new User(userId, "name", "email");
}

In this example, Xiao Hei creates a LoadingCache that automatically manages user records. When user information is needed, the cache is consulted first; on a miss, getUserFromDatabase is called to query the database and populate the cache. This greatly reduces the number of database accesses and improves application performance.

A word on caching policies. Guava offers many flexible strategies, such as capacity-based eviction, timed expiration, and reference-based collection. These help us manage the cache flexibly to fit different scenarios. In a memory-sensitive application, for instance, we might use soft- or weak-reference caching, so that when memory runs low the entries can be reclaimed by the garbage collector, avoiding memory leaks.

Chapter 5: Advanced Features and Techniques

Guava's cache is not only strong on the basics; it also offers many advanced features that let us control cache behavior with finer precision.

5.1 Expiration policies

First, expiration policies. Guava provides two kinds: time-based expiration and access-based expiration. Time-based expiration splits into expire-after-write and expire-after-access. Expire-after-write counts from the last write; once the configured time has passed, the entry expires. Expire-after-access counts from the last read or write.

Cache<String, String> cache = CacheBuilder.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES) // expire after write
    .expireAfterAccess(5, TimeUnit.MINUTES) // expire after access
    .build();

In this code, Xiao Hei configures an entry to expire 10 minutes after it is written, or 5 minutes after it was last accessed.

5.2 Weak-reference and soft-reference caches

Next, weak-reference and soft-reference caches. These two caching modes are especially useful for large objects and memory-sensitive environments. Soft-referenced entries are reclaimed by the garbage collector when memory runs short, while weak-referenced entries are always reclaimed at garbage collection.

Cache<String, BigObject> softCache = CacheBuilder.newBuilder()
    .softValues()
    .build();

Cache<String, BigObject> weakCache = CacheBuilder.newBuilder()
    .weakValues()
    .build();

Here, Xiao Hei creates two caches: one stores large objects via soft references, the other via weak references.

5.3 Explicit and automatic invalidation

Guava also supports explicit and automatic invalidation. Explicit invalidation means removing cache entries by hand, while automatic invalidation removes them based on certain conditions.

cache.invalidate(key); // explicitly remove a single key
cache.invalidateAll(keys); // explicitly remove several keys
cache.invalidateAll(); // remove all cache entries

In this snippet, Xiao Hei demonstrates how to remove objects from the cache explicitly.

With these advanced features and techniques, Guava's cache can handle not just ordinary caching needs but also more complex and specialized scenarios, delivering truly efficient and flexible cache management.

Chapter 6: Performance Optimization and Caveats

6.1 Performance optimization

First, performance. A Guava cache's performance depends heavily on its configuration. Setting a sensible cache size and expiration time can significantly affect performance: if the cache is too large, it may hog memory; if it is too small, data is reloaded constantly and performance drops.

Cache<String, Data> cache = CacheBuilder.newBuilder()
    .maximumSize(1000) // a sensible maximum size
    .expireAfterAccess(10, TimeUnit.MINUTES) // a suitable expiration time
    .build();

In this code, Xiao Hei sets a maximum size and an expiration time, both chosen from an understanding of the application and its actual needs.

6.2 Caveats

Next, a few caveats. When using a Guava cache, pay particular attention to cache consistency and the update strategy. For example, if the cached data is modified in the database, the cached copy needs to be updated accordingly, which calls for a well-designed refresh mechanism (see the sketch after the summary below).

When using reference-based caches (soft or weak references), you need to understand Java's garbage collection: such entries may be evicted earlier than expected.

Guava's cache is thread-safe in multi-threaded environments, but under high concurrency it can become a bottleneck. In highly concurrent scenarios, tuning the concurrency level appropriately is key to performance (also covered in the sketch below).

Chapter 7: Summary

Across these chapters we have walked through creating, configuring, using, and optimizing Guava's cache. As we have seen, Guava offers not only powerful caching functionality but also a wealth of flexible configuration options that fit a wide range of application scenarios.

In any field, the right tool greatly improves productivity. Guava's cache is exactly such a tool: it helps us optimize Java application performance and improve code readability and maintainability. Of course, like any powerful tool, using it correctly is what matters most. I hope these chapters help you understand it better and put it to good use!
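To make Chapter 6 concrete, here is a small sketch of my own (not from the article) combining the two knobs discussed there: concurrencyLevel for write contention, and refreshAfterWrite with an asynchronous reload for keeping entries roughly consistent with the database. All methods used are standard Guava CacheBuilder/CacheLoader APIs.

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListenableFutureTask;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TunedCacheSketch {
    private static final Executor refreshPool = Executors.newFixedThreadPool(2);

    public static LoadingCache<String, String> build() {
        return CacheBuilder.newBuilder()
                .maximumSize(10_000)
                .concurrencyLevel(8)                     // more segments for concurrent writes
                .refreshAfterWrite(1, TimeUnit.MINUTES)  // keep entries fresh without blocking reads
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return fetchFromDatabase(key);
                    }

                    @Override
                    public ListenableFuture<String> reload(String key, String oldValue) {
                        // refresh asynchronously; readers keep seeing oldValue meanwhile
                        ListenableFutureTask<String> task =
                                ListenableFutureTask.create(() -> fetchFromDatabase(key));
                        refreshPool.execute(task);
                        return task;
                    }
                });
    }

    private static String fetchFromDatabase(String key) {
        return "fresh value for " + key; // stand-in for the real query
    }
}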
How to Display the PDF File in an Iframe in Laravel?

To display a PDF file in an iframe in Laravel, you can use the HTML <iframe> tag with its src attribute pointing to the PDF file's URL. You can pass the URL to the view from the controller and then use it in the iframe tag in your Blade template. Make sure the PDF file's URL is publicly accessible and that the iframe dimensions are set appropriately to display the PDF content accurately. You may also need to set the iframe's style to ensure it fits within the layout of your application.

What is the impact of responsive design when displaying a PDF file in an iframe in Laravel?

Responsive design ensures that the PDF file displayed in an iframe in Laravel will adapt to different screen sizes and devices. This means the PDF file will be rendered effectively regardless of the device being used to view it. With responsive design, the PDF file will automatically adjust its layout and size to fit the screen, preserving legibility and usability.

Additionally, responsive design can improve the overall user experience by ensuring that the PDF file is displayed in a visually appealing way that is easy to read and interact with. This can help prevent issues such as the need to zoom in and out of the PDF file in order to read its contents, making it more accessible to users.

Overall, the impact of responsive design when displaying a PDF file in an iframe in Laravel is a more user-friendly and adaptable experience for viewers, regardless of the device they are using.

What is the proper way to secure the PDF file displayed in an iframe in Laravel?

One way to secure a PDF file displayed in an iframe in Laravel is to make sure that the PDF file is not publicly accessible. This can be achieved by storing the PDF file in a directory that is not accessible to the public and then using Laravel to serve the file through a route.

Here is an example of how to secure a PDF file in an iframe in Laravel:

1. Store the PDF file in a directory that is not publicly accessible, such as the "storage" directory in Laravel.

2. Create a route in your Laravel application that serves the PDF file. For example, you could create a route like this in your routes/web.php file:

Route::get('/pdf/{filename}', function ($filename) {
    $path = storage_path('app/' . $filename);
    return response()->file($path);
});

3. In your view file where you want to display the PDF file in an iframe, use the route that you created to load the PDF file. For example, you could use an iframe like this:

<iframe src="{{ url('/pdf/sample.pdf') }}" width="100%" height="600"></iframe>

By storing the PDF file in a non-publicly accessible directory and serving it through a Laravel route, you can ensure that the PDF file is secure and only accessible to authorized users.

How to dynamically load a PDF file in an iframe in Laravel?

To dynamically load a PDF file in an iframe in Laravel, you can use the following steps:

1. Create a route in your Laravel application to handle the PDF file request. For example:

Route::get('view-pdf/{pdfFileName}', 'PdfController@viewPdf')->name('view-pdf');

2. Create a controller called PdfController which will handle the PDF file request:

php artisan make:controller PdfController

3. In PdfController, create a method to load the PDF file in an iframe:

public function viewPdf($pdfFileName)
{
    $pdfPath = public_path('pdf/' . $pdfFileName);
    return response()->file($pdfPath);
}
4. Upload the PDF file to the public/pdf directory in your Laravel application.

5. In your Blade view, add an iframe element to load the PDF file dynamically:

<iframe src="{{ route('view-pdf', ['pdfFileName' => 'example.pdf']) }}" width="100%" height="500px"></iframe>

Replace 'example.pdf' with the name of the PDF file you want to load dynamically.

6. Now, when you navigate to the route where you have included the iframe element, the PDF file will be loaded dynamically in the iframe.

Note: Make sure to handle error checking and validation in your controller to prevent unauthorized access to PDF files (a sketch of such validation follows at the end of this article).

How to style the iframe for better PDF display in Laravel?

To style the iframe for better PDF display in Laravel, you can use CSS to customize the appearance of the iframe. Here are some styling suggestions:

1. Set the width and height of the iframe to match the dimensions of the PDF document.
2. Add a border to the iframe to give it a distinct look.
3. Set the background color of the iframe to match your website's color scheme.
4. Use padding to create space around the PDF document within the iframe.
5. Customize the font family, size, and color of the text displayed in the iframe.

You can add these CSS styles to your Laravel view file where you are displaying the iframe. Here is an example of how you can style the iframe using CSS:

<style>
    iframe {
        width: 100%;
        height: 500px;
        border: 1px solid #ccc;
        background-color: #f9f9f9;
        padding: 10px;
        font-family: Arial, sans-serif;
        font-size: 14px;
        color: #333;
    }
</style>

You can further customize the styling of the iframe by adding additional CSS properties based on your design preferences. Remember to adjust the dimensions, colors, and fonts to match the overall aesthetic of your Laravel application.

How to handle scrolling within the PDF file displayed in the iframe in Laravel?

To handle scrolling within the PDF file displayed in an iframe in Laravel, you can use the following steps:

1. Make sure you have included an iframe in your Laravel view file to display the PDF file. You can use the following code snippet to include the iframe:

<iframe src="/path/to/your/pdf/file.pdf" width="100%" height="600px"></iframe>

2. If you want to enable scrolling within the iframe, you can add the scrolling="yes" attribute to the iframe tag. This will allow users to scroll within the iframe to view the entire content of the PDF file.

<iframe src="/path/to/your/pdf/file.pdf" width="100%" height="600px" scrolling="yes"></iframe>

3. You can also customize the scrolling behavior of the iframe by using CSS. For example, you can set the overflow property of the iframe to auto to enable scrollbars when needed.

<iframe src="/path/to/your/pdf/file.pdf" width="100%" height="600px" style="overflow: auto;"></iframe>

By following these steps, you can easily handle scrolling within the PDF file displayed in an iframe in Laravel. Adjust the height and width values according to your requirements.
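Picking up the note from step 6, here is a minimal sketch (my own, not part of the original steps) of hardening the viewPdf method shown earlier so that a crafted {pdfFileName} such as "../../.env" cannot escape the pdf directory; basename(), preg_match(), abort(), public_path(), and response()->file() are standard PHP/Laravel functions:

public function viewPdf($pdfFileName)
{
    // basename() strips any directory components, blocking path traversal
    $safeName = basename($pdfFileName);

    // only allow simple names ending in .pdf (adjust the pattern to taste)
    if (!preg_match('/^[\w\-]+\.pdf$/', $safeName)) {
        abort(400, 'Invalid file name.');
    }

    $pdfPath = public_path('pdf/' . $safeName);

    if (!file_exists($pdfPath)) {
        abort(404);
    }

    return response()->file($pdfPath);
}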
The line

\[ \textcolor{green}{\sum}_{k=0}^n \]

makes the k=0 and n parts go to the right of the Sigma instead of at the top and bottom, as they should be. Can we retain k=0 and n at the top and bottom?

Comment: If you often use color in math mode you may be interested in \mathcolor (it avoids problems with spacing): Colored symbols – Qrrbrbirlbel, Mar 2 at 21:50

2 Answers

Accepted answer (10 votes):

The \textcolor macro will undo the operator status of \sum, so that both subscript and superscript are placed as they would be at an ordinary symbol. The macro \mathop makes the whole sequence (the green sum symbol) an operator again.

When you often use colored operator symbols, define a new macro, say

\newcommand*{\opcolor}[2]{\mathop{\textcolor{#1}{#2}}}

and use it as \opcolor{green}{\sum}. I have also defined a \csum macro based on \opcolor.

Code:

\documentclass{article}
\usepackage{xcolor}

\newcommand*{\opcolor}[2]{\mathop{\textcolor{#1}{#2}}}
\newcommand*{\csum}[1]{\opcolor{#1}\sum}

\begin{document}
\[ \mathop{\textcolor{green}{\sum}}_{k=0}^n \]
\[ \csum{green}_{k=0}^n \]
\[ \opcolor{green}{\sum}_{k=0}^n \]
\end{document}

Output: (rendered image)

Second answer:

Declare the green symbol as an operator via \mathop.

\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}

\begin{document}
\[ \mathop{\textcolor{green}{\sum}}_{k=0}^n \]
\end{document}
HELLO WORLD JS IN HTML

A tutorial on creating "Hello World" in HTML with JavaScript.

JavaScript is a programming language that lets you create interactive web content. How do you write a "Hello World" script using JavaScript in HTML? After reading this article, you will understand how.

Writing a JavaScript Script in HTML

JavaScript is usually written inside a <script> tag. This tag can be placed in the <head> or the <body> of an HTML document, but the most common placement is just before the closing </body> tag. The following is an example of JavaScript code in the body of an HTML document.

<!DOCTYPE html>
<html>
<body>

<h2>Display JavaScript in HTML</h2>

<button type="button" onclick="myFunction()">Click</button>

<p id="demo"></p>

<script>
function myFunction() {
  document.getElementById("demo").innerHTML = "Hello World!";
}
</script>

</body>
</html>

Code Explanation

Let's go through the function and purpose of each line of code in the example.

1. <!DOCTYPE html>: This line declares that the document is an HTML5 document.
2. <html>: This is the opening HTML tag. All HTML code must be placed between this opening tag and the closing HTML tag.
3. <body>: This tag defines the document body.
4. <h2>Display JavaScript in HTML</h2>: This is a header, or title, on the web page.
5. <button type="button" onclick="myFunction()">Click</button>: Creates a button that, when clicked, calls the JavaScript function myFunction().
6. <p id="demo"></p>: Creates a paragraph with the id "demo", which our JavaScript will later fill in.
7. <script>: This is the opening script tag.
8. function myFunction(): Defines a JavaScript function named "myFunction".
9. document.getElementById("demo").innerHTML = "Hello World!": Writes the text "Hello World!" into the element with the id "demo".
10. </script>: This is the closing script tag.
11. </body> and </html>: The closing body and html tags.

When the "Click" button is pressed, JavaScript changes the content of the element with the id "demo" to "Hello World!".
Java question (asked by Charlie): What is meant by "each JVM thread has its own program counter"?

I'm trying to understand what this statement means:

    Each Java Virtual Machine thread has its own pc (program counter) register. At any point, each Java Virtual Machine thread is executing the code of a single method, namely the current method (§2.6) for that thread.

https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-2.html#jvms-2.5.1

I assume that a JVM thread works like any other thread: every time that thread is scheduled to run (say, by the Linux kernel), its "program counter" is loaded from its task_struct data structure, so from the CPU's perspective there's only a single program counter; it just gets updated by the OS every time the OS switches threads. Is that correct?

I'm confused because that whole page seems to keep emphasizing that each JVM thread gets its own PC/stack/heap etc., but I thought that was a given for any process. Is the JVM somehow unique compared to other processes?

Answer

"I assume that a JVM thread works like any other thread"

The JVM is not a thread: it is a process that has many threads.

"...so from the CPU's perspective there's only a single program counter"

The program counter is just one of several registers that constitute the context of a thread. Each CPU has one set of physical registers (or two sets, if it's hyperthreaded, but let's keep things simple and ignore hyperthreading). Each CPU can therefore run exactly ONE thread at any given instant. But...

The operating system can "switch contexts": it can save all of the registers for one thread running on a given CPU, and then load the registers with the saved registers (including the program counter) from some other thread. In a typical desktop operating system, the operating system scheduler is invoked, maybe 100 times per second or more, to decide which threads should be running at that moment. It will switch out threads that were actually running at that moment, and switch in threads that have been waiting to run. That way, your computer can have many more live threads than it has CPUs.
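A tiny Java sketch (my own illustration, not from the original thread) of what per-thread pc registers buy you: two threads advance through the same method independently, each with its own current instruction and loop state.

public class PerThreadPc {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> countDown("A"));
        Thread b = new Thread(() -> countDown("B"));
        a.start();
        b.start();
        a.join();
        b.join();
    }

    // Each thread runs this method independently; its loop index and
    // "current instruction" live in that thread's own frame and pc register.
    private static void countDown(String name) {
        for (int i = 3; i > 0; i--) {
            System.out.println(name + ": " + i);
        }
    }
}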
COIN-OR::LEMON - Graph Library Changeset 769:eb61fbc64c16 in lemon-0.x Ignore: Timestamp: 08/23/04 13:26:09 (18 years ago) Author: marci Branch: default Phase: public Convert: svn:c9d7d8f5-90d6-0310-b91f-818b3a526b0e/lemon/trunk@1032 Message: . Location: src/work/marci/leda Files: 3 edited Legend: Unmodified Added Removed • src/work/marci/leda/bipartite_matching_leda.cc r648 r769   1515//#include <dimacs.h> 1616#include <hugo/time_measure.h> 17 #include <hugo/for_each_macros.h>  17#include <for_each_macros.h> 1818#include <hugo/graph_wrapper.h> 1919#include <bipartite_graph_wrapper.h> 2020#include <hugo/maps.h> 21 #include <max_flow.h>  21#include <hugo/max_flow.h> 2222 2323/**   9696  BGW::EdgeMap<int> dbyxcj(bgw); 9797 98   typedef stGraphWrapper<BGW> stGW;  98  typedef stBipartiteGraphWrapper<BGW> stGW; 9999  stGW stgw(bgw); 100100  ConstMap<stGW::Edge, int> const1map(1);   110110  typedef SageGraph MutableGraph; 111111//  while (max_flow_test.augmentOnBlockingFlow1<MutableGraph>()) { 112   while (max_flow_test.augmentOnBlockingFlow2()) { 113    std::cout << max_flow_test.flowValue() << std::endl; 114   }  112//   while (max_flow_test.augmentOnBlockingFlow2()) {  113//    std::cout << max_flow_test.flowValue() << std::endl;  114//   }  115  max_flow_test.run(); 115116  std::cout << "max flow value: " << max_flow_test.flowValue() << std::endl; 116117  std::cout << "elapsed time: " << ts << std::endl; • src/work/marci/leda/bipartite_matching_leda_gen.cc r768 r769   1515//#include <dimacs.h> 1616#include <hugo/time_measure.h> 17 #include <hugo/for_each_macros.h>  17#include <for_each_macros.h> 1818#include <hugo/graph_wrapper.h> 1919#include <bipartite_graph_wrapper.h> 2020#include <hugo/maps.h> 21 #include <max_flow.h>  21#include <hugo/max_flow.h>  22#include <augmenting_flow.h> 2223 2324/**   126127  FOR_EACH_LOC(stGW::EdgeIt, e, stgw) flow.set(e, 0); 127128  typedef SageGraph MutableGraph; 128   while (max_flow_test.augmentOnBlockingFlow<MutableGraph>()) { }  129  AugmentingFlow<stGW, int, ConstMap<stGW::Edge, int>, stGW::EdgeMap<int> >  130    max_flow_test_1(stgw, stgw.S_NODE, stgw.T_NODE, const1map, flow/*, true*/);  131  while (max_flow_test_1.augmentOnBlockingFlow<MutableGraph>()) { } 129132  std::cout << "HUGO max matching algorithm based on blocking flow augmentation." 130133            << std::endl << "Matching size: " 131             << max_flow_test.flowValue() << std::endl;  134            << max_flow_test_1.flowValue() << std::endl; 132135  std::cout << "elapsed time: " << ts << std::endl; 133136 • src/work/marci/leda/comparison.cc r768 r769   1515//#include <dimacs.h> 1616#include <hugo/time_measure.h> 17 #include <hugo/for_each_macros.h>  17#include <for_each_macros.h> 1818#include <hugo/graph_wrapper.h> 1919#include <bipartite_graph_wrapper.h> 2020#include <hugo/maps.h> 21 #include <max_flow.h>  21#include <hugo/max_flow.h> 2222 23 /** 24  * Inicializalja a veletlenszamgeneratort. 25  * Figyelem, ez nem jo igazi random szamokhoz, 26  * erre ne bizzad a titkaidat! 27  */ 28 void random_init() 29 { 30         unsigned int seed = getpid(); 31         seed |= seed << 15; 32         seed ^= time(0); 33  34         srand(seed); 35 } 36  37 /** 38  * Egy veletlen int-et ad vissza 0 es m-1 kozott. 
39  */ 40 int random(int m) 41 { 42   return int( double(m) * rand() / (RAND_MAX + 1.0) ); 43 }  23using std::cout;  24using std::endl; 4425 4526using namespace hugo;   6647 6748  int a; 68   std::cout << "number of nodes in the first color class=";  49  cout << "number of nodes in the first color class="; 6950  std::cin >> a; 7051  int b; 71   std::cout << "number of nodes in the second color class=";  52  cout << "number of nodes in the second color class="; 7253  std::cin >> b; 7354  int m; 74   std::cout << "number of edges=";  55  cout << "number of edges="; 7556  std::cin >> m; 7657  int k; 77   std::cout << "A bipartite graph is a random group graph if the color classes \nA and B are partitiones to A_0, A_1, ..., A_{k-1} and B_0, B_1, ..., B_{k-1} \nas equally as possible \nand the edges from A_i goes to A_{i-1 mod k} and A_{i+1 mod k}.\n"; 78   std::cout << "number of groups in LEDA random group graph=";  58  cout << "A bipartite graph is a random group graph if the color classes \nA and B are partitiones to A_0, A_1, ..., A_{k-1} and B_0, B_1, ..., B_{k-1} \nas equally as possible \nand the edges from A_i goes to A_{i-1 mod k} and A_{i+1 mod k}.\n";  59  cout << "number of groups in LEDA random group graph="; 7960  std::cin >> k; 80   std::cout << std::endl;  61  cout << endl; 8162  8263  leda_list<leda_node> lS;   11091    max_flow_test(stgw, stgw.S_NODE, stgw.T_NODE, const1map, flow/*, true*/); 11192  max_flow_test.run(); 112   std::cout << "HUGO max matching algorithm based on preflow." << std::endl  93  cout << "HUGO max matching algorithm based on preflow." << endl 11394            << "Size of matching: " 114             << max_flow_test.flowValue() << std::endl; 115   std::cout << "elapsed time: " << ts << std::endl << std::endl;  95            << max_flow_test.flowValue() << endl;  96  cout << "elapsed time: " << ts << endl << endl; 11697 11798  ts.reset();  11899  leda_list<leda_edge> ml=MAX_CARD_BIPARTITE_MATCHING(lg); 119   std::cout << "LEDA max matching algorithm." << std::endl  100  cout << "LEDA max matching algorithm." << endl 120101            << "Size of matching: " 121             << ml.size() << std::endl; 122   std::cout << "elapsed time: " << ts << std::endl << std::endl;  102            << ml.size() << endl;  103  cout << "elapsed time: " << ts << endl << endl; 123104 124105//   ts.reset();   126107//   typedef SageGraph MutableGraph; 127108//   while (max_flow_test.augmentOnBlockingFlow<MutableGraph>()) { } 128 //   std::cout << "HUGO max matching algorithm based on blocking flow augmentation." 129 //          << std::endl << "Matching size: " 130 //          << max_flow_test.flowValue() << std::endl; 131 //   std::cout << "elapsed time: " << ts << std::endl << std::endl;  109//   cout << "HUGO max matching algorithm based on blocking flow augmentation."  110//          << endl << "Matching size: "  111//          << max_flow_test.flowValue() << endl;  112//   cout << "elapsed time: " << ts << endl << endl; 132113 133114  {   160141    max_flow_test(hg, s, t, cm, flow); 161142  max_flow_test.run(); 162   std::cout << "HUGO max matching algorithm on SageGraph by copying the graph, based on preflow." 163             << std::endl  143  cout << "HUGO max matching algorithm on SageGraph by copying the graph, based on preflow."  
144            << endl 164145            << "Size of matching: " 165             << max_flow_test.flowValue() << std::endl; 166   cout << "elapsed time: " << ts << std::endl << std::endl;  146            << max_flow_test.flowValue() << endl;  147  cout << "elapsed time: " << ts << endl << endl; 167148  } 168149
genproto: google.golang.org/genproto/googleapis/cloud/vision/v1p1beta1 Index | Files package vision import "google.golang.org/genproto/googleapis/cloud/vision/v1p1beta1" Index Package Files geometry.pb.go image_annotator.pb.go text_annotation.pb.go web_detection.pb.go Variables var ( Likelihood_name = map[int32]string{ 0: "UNKNOWN", 1: "VERY_UNLIKELY", 2: "UNLIKELY", 3: "POSSIBLE", 4: "LIKELY", 5: "VERY_LIKELY", } Likelihood_value = map[string]int32{ "UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2, "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5, } ) Enum value maps for Likelihood. var ( Feature_Type_name = map[int32]string{ 0: "TYPE_UNSPECIFIED", 1: "FACE_DETECTION", 2: "LANDMARK_DETECTION", 3: "LOGO_DETECTION", 4: "LABEL_DETECTION", 5: "TEXT_DETECTION", 11: "DOCUMENT_TEXT_DETECTION", 6: "SAFE_SEARCH_DETECTION", 7: "IMAGE_PROPERTIES", 9: "CROP_HINTS", 10: "WEB_DETECTION", } Feature_Type_value = map[string]int32{ "TYPE_UNSPECIFIED": 0, "FACE_DETECTION": 1, "LANDMARK_DETECTION": 2, "LOGO_DETECTION": 3, "LABEL_DETECTION": 4, "TEXT_DETECTION": 5, "DOCUMENT_TEXT_DETECTION": 11, "SAFE_SEARCH_DETECTION": 6, "IMAGE_PROPERTIES": 7, "CROP_HINTS": 9, "WEB_DETECTION": 10, } ) Enum value maps for Feature_Type. var ( FaceAnnotation_Landmark_Type_name = map[int32]string{ 0: "UNKNOWN_LANDMARK", 1: "LEFT_EYE", 2: "RIGHT_EYE", 3: "LEFT_OF_LEFT_EYEBROW", 4: "RIGHT_OF_LEFT_EYEBROW", 5: "LEFT_OF_RIGHT_EYEBROW", 6: "RIGHT_OF_RIGHT_EYEBROW", 7: "MIDPOINT_BETWEEN_EYES", 8: "NOSE_TIP", 9: "UPPER_LIP", 10: "LOWER_LIP", 11: "MOUTH_LEFT", 12: "MOUTH_RIGHT", 13: "MOUTH_CENTER", 14: "NOSE_BOTTOM_RIGHT", 15: "NOSE_BOTTOM_LEFT", 16: "NOSE_BOTTOM_CENTER", 17: "LEFT_EYE_TOP_BOUNDARY", 18: "LEFT_EYE_RIGHT_CORNER", 19: "LEFT_EYE_BOTTOM_BOUNDARY", 20: "LEFT_EYE_LEFT_CORNER", 21: "RIGHT_EYE_TOP_BOUNDARY", 22: "RIGHT_EYE_RIGHT_CORNER", 23: "RIGHT_EYE_BOTTOM_BOUNDARY", 24: "RIGHT_EYE_LEFT_CORNER", 25: "LEFT_EYEBROW_UPPER_MIDPOINT", 26: "RIGHT_EYEBROW_UPPER_MIDPOINT", 27: "LEFT_EAR_TRAGION", 28: "RIGHT_EAR_TRAGION", 29: "LEFT_EYE_PUPIL", 30: "RIGHT_EYE_PUPIL", 31: "FOREHEAD_GLABELLA", 32: "CHIN_GNATHION", 33: "CHIN_LEFT_GONION", 34: "CHIN_RIGHT_GONION", } FaceAnnotation_Landmark_Type_value = map[string]int32{ "UNKNOWN_LANDMARK": 0, "LEFT_EYE": 1, "RIGHT_EYE": 2, "LEFT_OF_LEFT_EYEBROW": 3, "RIGHT_OF_LEFT_EYEBROW": 4, "LEFT_OF_RIGHT_EYEBROW": 5, "RIGHT_OF_RIGHT_EYEBROW": 6, "MIDPOINT_BETWEEN_EYES": 7, "NOSE_TIP": 8, "UPPER_LIP": 9, "LOWER_LIP": 10, "MOUTH_LEFT": 11, "MOUTH_RIGHT": 12, "MOUTH_CENTER": 13, "NOSE_BOTTOM_RIGHT": 14, "NOSE_BOTTOM_LEFT": 15, "NOSE_BOTTOM_CENTER": 16, "LEFT_EYE_TOP_BOUNDARY": 17, "LEFT_EYE_RIGHT_CORNER": 18, "LEFT_EYE_BOTTOM_BOUNDARY": 19, "LEFT_EYE_LEFT_CORNER": 20, "RIGHT_EYE_TOP_BOUNDARY": 21, "RIGHT_EYE_RIGHT_CORNER": 22, "RIGHT_EYE_BOTTOM_BOUNDARY": 23, "RIGHT_EYE_LEFT_CORNER": 24, "LEFT_EYEBROW_UPPER_MIDPOINT": 25, "RIGHT_EYEBROW_UPPER_MIDPOINT": 26, "LEFT_EAR_TRAGION": 27, "RIGHT_EAR_TRAGION": 28, "LEFT_EYE_PUPIL": 29, "RIGHT_EYE_PUPIL": 30, "FOREHEAD_GLABELLA": 31, "CHIN_GNATHION": 32, "CHIN_LEFT_GONION": 33, "CHIN_RIGHT_GONION": 34, } ) Enum value maps for FaceAnnotation_Landmark_Type. var ( TextAnnotation_DetectedBreak_BreakType_name = map[int32]string{ 0: "UNKNOWN", 1: "SPACE", 2: "SURE_SPACE", 3: "EOL_SURE_SPACE", 4: "HYPHEN", 5: "LINE_BREAK", } TextAnnotation_DetectedBreak_BreakType_value = map[string]int32{ "UNKNOWN": 0, "SPACE": 1, "SURE_SPACE": 2, "EOL_SURE_SPACE": 3, "HYPHEN": 4, "LINE_BREAK": 5, } ) Enum value maps for TextAnnotation_DetectedBreak_BreakType. 
var ( Block_BlockType_name = map[int32]string{ 0: "UNKNOWN", 1: "TEXT", 2: "TABLE", 3: "PICTURE", 4: "RULER", 5: "BARCODE", } Block_BlockType_value = map[string]int32{ "UNKNOWN": 0, "TEXT": 1, "TABLE": 2, "PICTURE": 3, "RULER": 4, "BARCODE": 5, } ) Enum value maps for Block_BlockType. var File_google_cloud_vision_v1p1beta1_geometry_proto protoreflect.FileDescriptor var File_google_cloud_vision_v1p1beta1_image_annotator_proto protoreflect.FileDescriptor var File_google_cloud_vision_v1p1beta1_text_annotation_proto protoreflect.FileDescriptor var File_google_cloud_vision_v1p1beta1_web_detection_proto protoreflect.FileDescriptor func RegisterImageAnnotatorServer Uses func RegisterImageAnnotatorServer(s *grpc.Server, srv ImageAnnotatorServer) type AnnotateImageRequest Uses type AnnotateImageRequest struct { // The image to be processed. Image *Image `protobuf:"bytes,1,opt,name=image,proto3" json:"image,omitempty"` // Requested features. Features []*Feature `protobuf:"bytes,2,rep,name=features,proto3" json:"features,omitempty"` // Additional context that may accompany the image. ImageContext *ImageContext `protobuf:"bytes,3,opt,name=image_context,json=imageContext,proto3" json:"image_context,omitempty"` // contains filtered or unexported fields } Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features. func (*AnnotateImageRequest) Descriptor Uses func (*AnnotateImageRequest) Descriptor() ([]byte, []int) Deprecated: Use AnnotateImageRequest.ProtoReflect.Descriptor instead. func (*AnnotateImageRequest) GetFeatures Uses func (x *AnnotateImageRequest) GetFeatures() []*Feature func (*AnnotateImageRequest) GetImage Uses func (x *AnnotateImageRequest) GetImage() *Image func (*AnnotateImageRequest) GetImageContext Uses func (x *AnnotateImageRequest) GetImageContext() *ImageContext func (*AnnotateImageRequest) ProtoMessage Uses func (*AnnotateImageRequest) ProtoMessage() func (*AnnotateImageRequest) ProtoReflect Uses func (x *AnnotateImageRequest) ProtoReflect() protoreflect.Message func (*AnnotateImageRequest) Reset Uses func (x *AnnotateImageRequest) Reset() func (*AnnotateImageRequest) String Uses func (x *AnnotateImageRequest) String() string type AnnotateImageResponse Uses type AnnotateImageResponse struct { // If present, face detection has completed successfully. FaceAnnotations []*FaceAnnotation `protobuf:"bytes,1,rep,name=face_annotations,json=faceAnnotations,proto3" json:"face_annotations,omitempty"` // If present, landmark detection has completed successfully. LandmarkAnnotations []*EntityAnnotation `protobuf:"bytes,2,rep,name=landmark_annotations,json=landmarkAnnotations,proto3" json:"landmark_annotations,omitempty"` // If present, logo detection has completed successfully. LogoAnnotations []*EntityAnnotation `protobuf:"bytes,3,rep,name=logo_annotations,json=logoAnnotations,proto3" json:"logo_annotations,omitempty"` // If present, label detection has completed successfully. LabelAnnotations []*EntityAnnotation `protobuf:"bytes,4,rep,name=label_annotations,json=labelAnnotations,proto3" json:"label_annotations,omitempty"` // If present, text (OCR) detection has completed successfully. TextAnnotations []*EntityAnnotation `protobuf:"bytes,5,rep,name=text_annotations,json=textAnnotations,proto3" json:"text_annotations,omitempty"` // If present, text (OCR) detection or document (OCR) text detection has // completed successfully. // This annotation provides the structural hierarchy for the OCR detected // text. 
FullTextAnnotation *TextAnnotation `protobuf:"bytes,12,opt,name=full_text_annotation,json=fullTextAnnotation,proto3" json:"full_text_annotation,omitempty"` // If present, safe-search annotation has completed successfully. SafeSearchAnnotation *SafeSearchAnnotation `protobuf:"bytes,6,opt,name=safe_search_annotation,json=safeSearchAnnotation,proto3" json:"safe_search_annotation,omitempty"` // If present, image properties were extracted successfully. ImagePropertiesAnnotation *ImageProperties `protobuf:"bytes,8,opt,name=image_properties_annotation,json=imagePropertiesAnnotation,proto3" json:"image_properties_annotation,omitempty"` // If present, crop hints have completed successfully. CropHintsAnnotation *CropHintsAnnotation `protobuf:"bytes,11,opt,name=crop_hints_annotation,json=cropHintsAnnotation,proto3" json:"crop_hints_annotation,omitempty"` // If present, web detection has completed successfully. WebDetection *WebDetection `protobuf:"bytes,13,opt,name=web_detection,json=webDetection,proto3" json:"web_detection,omitempty"` // If set, represents the error message for the operation. // Note that filled-in image annotations are guaranteed to be // correct, even when `error` is set. Error *status.Status `protobuf:"bytes,9,opt,name=error,proto3" json:"error,omitempty"` // contains filtered or unexported fields } Response to an image annotation request. func (*AnnotateImageResponse) Descriptor Uses func (*AnnotateImageResponse) Descriptor() ([]byte, []int) Deprecated: Use AnnotateImageResponse.ProtoReflect.Descriptor instead. func (*AnnotateImageResponse) GetCropHintsAnnotation Uses func (x *AnnotateImageResponse) GetCropHintsAnnotation() *CropHintsAnnotation func (*AnnotateImageResponse) GetError Uses func (x *AnnotateImageResponse) GetError() *status.Status func (*AnnotateImageResponse) GetFaceAnnotations Uses func (x *AnnotateImageResponse) GetFaceAnnotations() []*FaceAnnotation func (*AnnotateImageResponse) GetFullTextAnnotation Uses func (x *AnnotateImageResponse) GetFullTextAnnotation() *TextAnnotation func (*AnnotateImageResponse) GetImagePropertiesAnnotation Uses func (x *AnnotateImageResponse) GetImagePropertiesAnnotation() *ImageProperties func (*AnnotateImageResponse) GetLabelAnnotations Uses func (x *AnnotateImageResponse) GetLabelAnnotations() []*EntityAnnotation func (*AnnotateImageResponse) GetLandmarkAnnotations Uses func (x *AnnotateImageResponse) GetLandmarkAnnotations() []*EntityAnnotation func (*AnnotateImageResponse) GetLogoAnnotations Uses func (x *AnnotateImageResponse) GetLogoAnnotations() []*EntityAnnotation func (*AnnotateImageResponse) GetSafeSearchAnnotation Uses func (x *AnnotateImageResponse) GetSafeSearchAnnotation() *SafeSearchAnnotation func (*AnnotateImageResponse) GetTextAnnotations Uses func (x *AnnotateImageResponse) GetTextAnnotations() []*EntityAnnotation func (*AnnotateImageResponse) GetWebDetection Uses func (x *AnnotateImageResponse) GetWebDetection() *WebDetection func (*AnnotateImageResponse) ProtoMessage Uses func (*AnnotateImageResponse) ProtoMessage() func (*AnnotateImageResponse) ProtoReflect Uses func (x *AnnotateImageResponse) ProtoReflect() protoreflect.Message func (*AnnotateImageResponse) Reset Uses func (x *AnnotateImageResponse) Reset() func (*AnnotateImageResponse) String Uses func (x *AnnotateImageResponse) String() string type BatchAnnotateImagesRequest Uses type BatchAnnotateImagesRequest struct { // Required. Individual image annotation requests for this batch. 
Requests []*AnnotateImageRequest `protobuf:"bytes,1,rep,name=requests,proto3" json:"requests,omitempty"` // contains filtered or unexported fields } Multiple image annotation requests are batched into a single service call. func (*BatchAnnotateImagesRequest) Descriptor Uses func (*BatchAnnotateImagesRequest) Descriptor() ([]byte, []int) Deprecated: Use BatchAnnotateImagesRequest.ProtoReflect.Descriptor instead. func (*BatchAnnotateImagesRequest) GetRequests Uses func (x *BatchAnnotateImagesRequest) GetRequests() []*AnnotateImageRequest func (*BatchAnnotateImagesRequest) ProtoMessage Uses func (*BatchAnnotateImagesRequest) ProtoMessage() func (*BatchAnnotateImagesRequest) ProtoReflect Uses func (x *BatchAnnotateImagesRequest) ProtoReflect() protoreflect.Message func (*BatchAnnotateImagesRequest) Reset Uses func (x *BatchAnnotateImagesRequest) Reset() func (*BatchAnnotateImagesRequest) String Uses func (x *BatchAnnotateImagesRequest) String() string type BatchAnnotateImagesResponse Uses type BatchAnnotateImagesResponse struct { // Individual responses to image annotation requests within the batch. Responses []*AnnotateImageResponse `protobuf:"bytes,1,rep,name=responses,proto3" json:"responses,omitempty"` // contains filtered or unexported fields } Response to a batch image annotation request. func (*BatchAnnotateImagesResponse) Descriptor Uses func (*BatchAnnotateImagesResponse) Descriptor() ([]byte, []int) Deprecated: Use BatchAnnotateImagesResponse.ProtoReflect.Descriptor instead. func (*BatchAnnotateImagesResponse) GetResponses Uses func (x *BatchAnnotateImagesResponse) GetResponses() []*AnnotateImageResponse func (*BatchAnnotateImagesResponse) ProtoMessage Uses func (*BatchAnnotateImagesResponse) ProtoMessage() func (*BatchAnnotateImagesResponse) ProtoReflect Uses func (x *BatchAnnotateImagesResponse) ProtoReflect() protoreflect.Message func (*BatchAnnotateImagesResponse) Reset Uses func (x *BatchAnnotateImagesResponse) Reset() func (*BatchAnnotateImagesResponse) String Uses func (x *BatchAnnotateImagesResponse) String() string type Block Uses type Block struct { // Additional information detected for the block. Property *TextAnnotation_TextProperty `protobuf:"bytes,1,opt,name=property,proto3" json:"property,omitempty"` // The bounding box for the block. // The vertices are in the order of top-left, top-right, bottom-right, // bottom-left. When a rotation of the bounding box is detected the rotation // is represented as around the top-left corner as defined when the text is // read in the 'natural' orientation. // For example: // * when the text is horizontal it might look like: // 0----1 // | | // 3----2 // * when it's rotated 180 degrees around the top-left corner it becomes: // 2----3 // | | // 1----0 // and the vertice order will still be (0, 1, 2, 3). BoundingBox *BoundingPoly `protobuf:"bytes,2,opt,name=bounding_box,json=boundingBox,proto3" json:"bounding_box,omitempty"` // List of paragraphs in this block (if this blocks is of type text). Paragraphs []*Paragraph `protobuf:"bytes,3,rep,name=paragraphs,proto3" json:"paragraphs,omitempty"` // Detected block type (text, image etc) for this block. BlockType Block_BlockType `protobuf:"varint,4,opt,name=block_type,json=blockType,proto3,enum=google.cloud.vision.v1p1beta1.Block_BlockType" json:"block_type,omitempty"` // Confidence of the OCR results on the block. Range [0, 1]. 
Confidence float32 `protobuf:"fixed32,5,opt,name=confidence,proto3" json:"confidence,omitempty"` // contains filtered or unexported fields } Logical element on the page. func (*Block) Descriptor Uses func (*Block) Descriptor() ([]byte, []int) Deprecated: Use Block.ProtoReflect.Descriptor instead. func (*Block) GetBlockType Uses func (x *Block) GetBlockType() Block_BlockType func (*Block) GetBoundingBox Uses func (x *Block) GetBoundingBox() *BoundingPoly func (*Block) GetConfidence Uses func (x *Block) GetConfidence() float32 func (*Block) GetParagraphs Uses func (x *Block) GetParagraphs() []*Paragraph func (*Block) GetProperty Uses func (x *Block) GetProperty() *TextAnnotation_TextProperty func (*Block) ProtoMessage Uses func (*Block) ProtoMessage() func (*Block) ProtoReflect Uses func (x *Block) ProtoReflect() protoreflect.Message func (*Block) Reset Uses func (x *Block) Reset() func (*Block) String Uses func (x *Block) String() string type Block_BlockType Uses type Block_BlockType int32 Type of a block (text, image etc) as identified by OCR. const ( // Unknown block type. Block_UNKNOWN Block_BlockType = 0 // Regular text block. Block_TEXT Block_BlockType = 1 // Table block. Block_TABLE Block_BlockType = 2 // Image block. Block_PICTURE Block_BlockType = 3 // Horizontal/vertical line box. Block_RULER Block_BlockType = 4 // Barcode block. Block_BARCODE Block_BlockType = 5 ) func (Block_BlockType) Descriptor Uses func (Block_BlockType) Descriptor() protoreflect.EnumDescriptor func (Block_BlockType) Enum Uses func (x Block_BlockType) Enum() *Block_BlockType func (Block_BlockType) EnumDescriptor Uses func (Block_BlockType) EnumDescriptor() ([]byte, []int) Deprecated: Use Block_BlockType.Descriptor instead. func (Block_BlockType) Number Uses func (x Block_BlockType) Number() protoreflect.EnumNumber func (Block_BlockType) String Uses func (x Block_BlockType) String() string func (Block_BlockType) Type Uses func (Block_BlockType) Type() protoreflect.EnumType type BoundingPoly Uses type BoundingPoly struct { // The bounding polygon vertices. Vertices []*Vertex `protobuf:"bytes,1,rep,name=vertices,proto3" json:"vertices,omitempty"` // contains filtered or unexported fields } A bounding polygon for the detected image annotation. func (*BoundingPoly) Descriptor Uses func (*BoundingPoly) Descriptor() ([]byte, []int) Deprecated: Use BoundingPoly.ProtoReflect.Descriptor instead. func (*BoundingPoly) GetVertices Uses func (x *BoundingPoly) GetVertices() []*Vertex func (*BoundingPoly) ProtoMessage Uses func (*BoundingPoly) ProtoMessage() func (*BoundingPoly) ProtoReflect Uses func (x *BoundingPoly) ProtoReflect() protoreflect.Message func (*BoundingPoly) Reset Uses func (x *BoundingPoly) Reset() func (*BoundingPoly) String Uses func (x *BoundingPoly) String() string type ColorInfo Uses type ColorInfo struct { // RGB components of the color. Color *color.Color `protobuf:"bytes,1,opt,name=color,proto3" json:"color,omitempty"` // Image-specific score for this color. Value in range [0, 1]. Score float32 `protobuf:"fixed32,2,opt,name=score,proto3" json:"score,omitempty"` // The fraction of pixels the color occupies in the image. // Value in range [0, 1]. PixelFraction float32 `protobuf:"fixed32,3,opt,name=pixel_fraction,json=pixelFraction,proto3" json:"pixel_fraction,omitempty"` // contains filtered or unexported fields } Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. 
type CropHint

    type CropHint struct {
        // The bounding polygon for the crop region. The coordinates of the
        // bounding box are in the original image's scale, as returned in
        // `ImageParams`.
        BoundingPoly *BoundingPoly `protobuf:"bytes,1,opt,name=bounding_poly,json=boundingPoly,proto3" json:"bounding_poly,omitempty"`
        // Confidence of this being a salient region. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,2,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // Fraction of importance of this salient region with respect to the
        // original image.
        ImportanceFraction float32 `protobuf:"fixed32,3,opt,name=importance_fraction,json=importanceFraction,proto3" json:"importance_fraction,omitempty"`
        // contains filtered or unexported fields
    }

Single crop hint that is used to generate a new crop when serving an image.

    func (*CropHint) Descriptor() ([]byte, []int)
        Deprecated: Use CropHint.ProtoReflect.Descriptor instead.
    func (x *CropHint) GetBoundingPoly() *BoundingPoly
    func (x *CropHint) GetConfidence() float32
    func (x *CropHint) GetImportanceFraction() float32
    func (*CropHint) ProtoMessage()
    func (x *CropHint) ProtoReflect() protoreflect.Message
    func (x *CropHint) Reset()
    func (x *CropHint) String() string

type CropHintsAnnotation

    type CropHintsAnnotation struct {
        // Crop hint results.
        CropHints []*CropHint `protobuf:"bytes,1,rep,name=crop_hints,json=cropHints,proto3" json:"crop_hints,omitempty"`
        // contains filtered or unexported fields
    }

Set of crop hints that are used to generate new crops when serving images.

    func (*CropHintsAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use CropHintsAnnotation.ProtoReflect.Descriptor instead.
    func (x *CropHintsAnnotation) GetCropHints() []*CropHint
    func (*CropHintsAnnotation) ProtoMessage()
    func (x *CropHintsAnnotation) ProtoReflect() protoreflect.Message
    func (x *CropHintsAnnotation) Reset()
    func (x *CropHintsAnnotation) String() string

type CropHintsParams

    type CropHintsParams struct {
        // Aspect ratios in floats, representing the ratio of the width to the
        // height of the image. For example, if the desired aspect ratio is
        // 4/3, the corresponding float value should be 1.33333. If not
        // specified, the best possible crop is returned. The number of
        // provided aspect ratios is limited to a maximum of 16; any aspect
        // ratios provided after the 16th are ignored.
        AspectRatios []float32 `protobuf:"fixed32,1,rep,packed,name=aspect_ratios,json=aspectRatios,proto3" json:"aspect_ratios,omitempty"`
        // contains filtered or unexported fields
    }

Parameters for crop hints annotation request.

    func (*CropHintsParams) Descriptor() ([]byte, []int)
        Deprecated: Use CropHintsParams.ProtoReflect.Descriptor instead.
    func (x *CropHintsParams) GetAspectRatios() []float32
    func (*CropHintsParams) ProtoMessage()
    func (x *CropHintsParams) ProtoReflect() protoreflect.Message
    func (x *CropHintsParams) Reset()
    func (x *CropHintsParams) String() string
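A sketch of requesting crop hints for specific output shapes by attaching CropHintsParams to an ImageContext (documented later on this page); the two ratios and the helper name are illustrative:

    // cropHintsContext asks for 16:9 and 4:3 crops; at most 16 ratios are
    // honored, and anything past the 16th is ignored by the service.
    func cropHintsContext() *visionpb.ImageContext {
        return &visionpb.ImageContext{
            CropHintsParams: &visionpb.CropHintsParams{
                AspectRatios: []float32{1.77778, 1.33333},
            },
        }
    }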
type DominantColorsAnnotation

    type DominantColorsAnnotation struct {
        // RGB color values with their score and pixel fraction.
        Colors []*ColorInfo `protobuf:"bytes,1,rep,name=colors,proto3" json:"colors,omitempty"`
        // contains filtered or unexported fields
    }

Set of dominant colors and their corresponding scores.

    func (*DominantColorsAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use DominantColorsAnnotation.ProtoReflect.Descriptor instead.
    func (x *DominantColorsAnnotation) GetColors() []*ColorInfo
    func (*DominantColorsAnnotation) ProtoMessage()
    func (x *DominantColorsAnnotation) ProtoReflect() protoreflect.Message
    func (x *DominantColorsAnnotation) Reset()
    func (x *DominantColorsAnnotation) String() string

type EntityAnnotation

    type EntityAnnotation struct {
        // Opaque entity ID. Some IDs may be available in
        // [Google Knowledge Graph Search
        // API](https://developers.google.com/knowledge-graph/).
        Mid string `protobuf:"bytes,1,opt,name=mid,proto3" json:"mid,omitempty"`
        // The language code for the locale in which the entity textual
        // `description` is expressed.
        Locale string `protobuf:"bytes,2,opt,name=locale,proto3" json:"locale,omitempty"`
        // Entity textual description, expressed in its `locale` language.
        Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
        // Overall score of the result. Range [0, 1].
        Score float32 `protobuf:"fixed32,4,opt,name=score,proto3" json:"score,omitempty"`
        // The accuracy of the entity detection in an image.
        // For example, for an image in which the "Eiffel Tower" entity is
        // detected, this field represents the confidence that there is a
        // tower in the query image. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,5,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // The relevancy of the ICA (Image Content Annotation) label to the
        // image. For example, the relevancy of "tower" is likely higher to an
        // image containing the detected "Eiffel Tower" than to an image
        // containing a detected distant towering building, even though the
        // confidence that there is a tower in each image may be the same.
        // Range [0, 1].
        Topicality float32 `protobuf:"fixed32,6,opt,name=topicality,proto3" json:"topicality,omitempty"`
        // Image region to which this entity belongs. Not produced
        // for `LABEL_DETECTION` features.
        BoundingPoly *BoundingPoly `protobuf:"bytes,7,opt,name=bounding_poly,json=boundingPoly,proto3" json:"bounding_poly,omitempty"`
        // The location information for the detected entity. Multiple
        // `LocationInfo` elements can be present because one location may
        // indicate the location of the scene in the image, and another
        // location may indicate the location of the place where the image was
        // taken. Location information is usually present for landmarks.
        Locations []*LocationInfo `protobuf:"bytes,8,rep,name=locations,proto3" json:"locations,omitempty"`
        // Some entities may have optional user-supplied `Property`
        // (name/value) fields, such as a score or string that qualifies the
        // entity.
        Properties []*Property `protobuf:"bytes,9,rep,name=properties,proto3" json:"properties,omitempty"`
        // contains filtered or unexported fields
    }

Set of detected entity features.

    func (*EntityAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use EntityAnnotation.ProtoReflect.Descriptor instead.
    func (x *EntityAnnotation) GetBoundingPoly() *BoundingPoly
    func (x *EntityAnnotation) GetConfidence() float32
    func (x *EntityAnnotation) GetDescription() string
    func (x *EntityAnnotation) GetLocale() string
    func (x *EntityAnnotation) GetLocations() []*LocationInfo
    func (x *EntityAnnotation) GetMid() string
    func (x *EntityAnnotation) GetProperties() []*Property
    func (x *EntityAnnotation) GetScore() float32
    func (x *EntityAnnotation) GetTopicality() float32
    func (*EntityAnnotation) ProtoMessage()
    func (x *EntityAnnotation) ProtoReflect() protoreflect.Message
    func (x *EntityAnnotation) Reset()
    func (x *EntityAnnotation) String() string
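A sketch of dumping a slice of entity results, for example the label annotations pulled out of a response; the helper name printLabels is hypothetical:

    import "fmt"

    // printLabels lists each detected entity with its score and topicality,
    // both of which are in the range [0, 1].
    func printLabels(labels []*visionpb.EntityAnnotation) {
        for _, e := range labels {
            fmt.Printf("%-20s mid=%s score=%.2f topicality=%.2f\n",
                e.GetDescription(), e.GetMid(), e.GetScore(), e.GetTopicality())
        }
    }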
type FaceAnnotation

    type FaceAnnotation struct {
        // The bounding polygon around the face. The coordinates of the
        // bounding box are in the original image's scale, as returned in
        // `ImageParams`. The bounding box is computed to "frame" the face in
        // accordance with human expectations. It is based on the landmarker
        // results. Note that one or more x and/or y coordinates may not be
        // generated in the `BoundingPoly` (the polygon will be unbounded) if
        // only a partial face appears in the image to be annotated.
        BoundingPoly *BoundingPoly `protobuf:"bytes,1,opt,name=bounding_poly,json=boundingPoly,proto3" json:"bounding_poly,omitempty"`
        // The `fd_bounding_poly` bounding polygon is tighter than the
        // `boundingPoly`, and encloses only the skin part of the face.
        // Typically, it is used to eliminate the face from any image analysis
        // that detects the "amount of skin" visible in an image. It is not
        // based on the landmarker results, only on the initial face
        // detection, hence the `fd` (face detection) prefix.
        FdBoundingPoly *BoundingPoly `protobuf:"bytes,2,opt,name=fd_bounding_poly,json=fdBoundingPoly,proto3" json:"fd_bounding_poly,omitempty"`
        // Detected face landmarks.
        Landmarks []*FaceAnnotation_Landmark `protobuf:"bytes,3,rep,name=landmarks,proto3" json:"landmarks,omitempty"`
        // Roll angle, which indicates the amount of clockwise/anti-clockwise
        // rotation of the face relative to the image vertical about the axis
        // perpendicular to the face. Range [-180,180].
        RollAngle float32 `protobuf:"fixed32,4,opt,name=roll_angle,json=rollAngle,proto3" json:"roll_angle,omitempty"`
        // Yaw angle, which indicates the leftward/rightward angle that the
        // face is pointing relative to the vertical plane perpendicular to
        // the image. Range [-180,180].
        PanAngle float32 `protobuf:"fixed32,5,opt,name=pan_angle,json=panAngle,proto3" json:"pan_angle,omitempty"`
        // Pitch angle, which indicates the upwards/downwards angle that the
        // face is pointing relative to the image's horizontal plane.
        // Range [-180,180].
        TiltAngle float32 `protobuf:"fixed32,6,opt,name=tilt_angle,json=tiltAngle,proto3" json:"tilt_angle,omitempty"`
        // Detection confidence. Range [0, 1].
        DetectionConfidence float32 `protobuf:"fixed32,7,opt,name=detection_confidence,json=detectionConfidence,proto3" json:"detection_confidence,omitempty"`
        // Face landmarking confidence. Range [0, 1].
        LandmarkingConfidence float32 `protobuf:"fixed32,8,opt,name=landmarking_confidence,json=landmarkingConfidence,proto3" json:"landmarking_confidence,omitempty"`
        // Joy likelihood.
        JoyLikelihood Likelihood `protobuf:"varint,9,opt,name=joy_likelihood,json=joyLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"joy_likelihood,omitempty"`
        // Sorrow likelihood.
        SorrowLikelihood Likelihood `protobuf:"varint,10,opt,name=sorrow_likelihood,json=sorrowLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"sorrow_likelihood,omitempty"`
        // Anger likelihood.
        AngerLikelihood Likelihood `protobuf:"varint,11,opt,name=anger_likelihood,json=angerLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"anger_likelihood,omitempty"`
        // Surprise likelihood.
        SurpriseLikelihood Likelihood `protobuf:"varint,12,opt,name=surprise_likelihood,json=surpriseLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"surprise_likelihood,omitempty"`
        // Under-exposed likelihood.
        UnderExposedLikelihood Likelihood `protobuf:"varint,13,opt,name=under_exposed_likelihood,json=underExposedLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"under_exposed_likelihood,omitempty"`
        // Blurred likelihood.
        BlurredLikelihood Likelihood `protobuf:"varint,14,opt,name=blurred_likelihood,json=blurredLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"blurred_likelihood,omitempty"`
        // Headwear likelihood.
        HeadwearLikelihood Likelihood `protobuf:"varint,15,opt,name=headwear_likelihood,json=headwearLikelihood,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"headwear_likelihood,omitempty"`
        // contains filtered or unexported fields
    }

A face annotation object contains the results of face detection.

    func (*FaceAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use FaceAnnotation.ProtoReflect.Descriptor instead.
    func (x *FaceAnnotation) GetAngerLikelihood() Likelihood
    func (x *FaceAnnotation) GetBlurredLikelihood() Likelihood
    func (x *FaceAnnotation) GetBoundingPoly() *BoundingPoly
    func (x *FaceAnnotation) GetDetectionConfidence() float32
    func (x *FaceAnnotation) GetFdBoundingPoly() *BoundingPoly
    func (x *FaceAnnotation) GetHeadwearLikelihood() Likelihood
    func (x *FaceAnnotation) GetJoyLikelihood() Likelihood
    func (x *FaceAnnotation) GetLandmarkingConfidence() float32
    func (x *FaceAnnotation) GetLandmarks() []*FaceAnnotation_Landmark
    func (x *FaceAnnotation) GetPanAngle() float32
    func (x *FaceAnnotation) GetRollAngle() float32
    func (x *FaceAnnotation) GetSorrowLikelihood() Likelihood
    func (x *FaceAnnotation) GetSurpriseLikelihood() Likelihood
    func (x *FaceAnnotation) GetTiltAngle() float32
    func (x *FaceAnnotation) GetUnderExposedLikelihood() Likelihood
    func (*FaceAnnotation) ProtoMessage()
    func (x *FaceAnnotation) ProtoReflect() protoreflect.Message
    func (x *FaceAnnotation) Reset()
    func (x *FaceAnnotation) String() string
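The emotion and image-quality fields are bucketized Likelihood values (documented later on this page) whose numeric order runs from VERY_UNLIKELY (1) up to VERY_LIKELY (5), so a cutoff can be written as a plain comparison. A sketch; the name isJoyful and the LIKELY threshold are illustrative choices:

    // isJoyful treats a face as joyful when the bucketized likelihood is
    // LIKELY or VERY_LIKELY. Note that UNKNOWN (0) also fails the test.
    func isJoyful(f *visionpb.FaceAnnotation) bool {
        return f.GetJoyLikelihood() >= visionpb.Likelihood_LIKELY
    }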
type FaceAnnotation_Landmark

    type FaceAnnotation_Landmark struct {
        // Face landmark type.
        Type FaceAnnotation_Landmark_Type `protobuf:"varint,3,opt,name=type,proto3,enum=google.cloud.vision.v1p1beta1.FaceAnnotation_Landmark_Type" json:"type,omitempty"`
        // Face landmark position.
        Position *Position `protobuf:"bytes,4,opt,name=position,proto3" json:"position,omitempty"`
        // contains filtered or unexported fields
    }

A face-specific landmark (for example, a face feature).

    func (*FaceAnnotation_Landmark) Descriptor() ([]byte, []int)
        Deprecated: Use FaceAnnotation_Landmark.ProtoReflect.Descriptor instead.
    func (x *FaceAnnotation_Landmark) GetPosition() *Position
    func (x *FaceAnnotation_Landmark) GetType() FaceAnnotation_Landmark_Type
    func (*FaceAnnotation_Landmark) ProtoMessage()
    func (x *FaceAnnotation_Landmark) ProtoReflect() protoreflect.Message
    func (x *FaceAnnotation_Landmark) Reset()
    func (x *FaceAnnotation_Landmark) String() string

type FaceAnnotation_Landmark_Type

    type FaceAnnotation_Landmark_Type int32

Face landmark (feature) type. Left and right are defined from the vantage of the viewer of the image without considering mirror projections typical of photos. So, `LEFT_EYE`, typically, is the person's right eye.

    const (
        FaceAnnotation_Landmark_UNKNOWN_LANDMARK FaceAnnotation_Landmark_Type = 0 // Unknown face landmark detected. Should not be filled.
        FaceAnnotation_Landmark_LEFT_EYE FaceAnnotation_Landmark_Type = 1 // Left eye.
        FaceAnnotation_Landmark_RIGHT_EYE FaceAnnotation_Landmark_Type = 2 // Right eye.
        FaceAnnotation_Landmark_LEFT_OF_LEFT_EYEBROW FaceAnnotation_Landmark_Type = 3 // Left of left eyebrow.
        FaceAnnotation_Landmark_RIGHT_OF_LEFT_EYEBROW FaceAnnotation_Landmark_Type = 4 // Right of left eyebrow.
        FaceAnnotation_Landmark_LEFT_OF_RIGHT_EYEBROW FaceAnnotation_Landmark_Type = 5 // Left of right eyebrow.
        FaceAnnotation_Landmark_RIGHT_OF_RIGHT_EYEBROW FaceAnnotation_Landmark_Type = 6 // Right of right eyebrow.
        FaceAnnotation_Landmark_MIDPOINT_BETWEEN_EYES FaceAnnotation_Landmark_Type = 7 // Midpoint between eyes.
        FaceAnnotation_Landmark_NOSE_TIP FaceAnnotation_Landmark_Type = 8 // Nose tip.
        FaceAnnotation_Landmark_UPPER_LIP FaceAnnotation_Landmark_Type = 9 // Upper lip.
        FaceAnnotation_Landmark_LOWER_LIP FaceAnnotation_Landmark_Type = 10 // Lower lip.
        FaceAnnotation_Landmark_MOUTH_LEFT FaceAnnotation_Landmark_Type = 11 // Mouth left.
        FaceAnnotation_Landmark_MOUTH_RIGHT FaceAnnotation_Landmark_Type = 12 // Mouth right.
        FaceAnnotation_Landmark_MOUTH_CENTER FaceAnnotation_Landmark_Type = 13 // Mouth center.
        FaceAnnotation_Landmark_NOSE_BOTTOM_RIGHT FaceAnnotation_Landmark_Type = 14 // Nose, bottom right.
        FaceAnnotation_Landmark_NOSE_BOTTOM_LEFT FaceAnnotation_Landmark_Type = 15 // Nose, bottom left.
        FaceAnnotation_Landmark_NOSE_BOTTOM_CENTER FaceAnnotation_Landmark_Type = 16 // Nose, bottom center.
        FaceAnnotation_Landmark_LEFT_EYE_TOP_BOUNDARY FaceAnnotation_Landmark_Type = 17 // Left eye, top boundary.
        FaceAnnotation_Landmark_LEFT_EYE_RIGHT_CORNER FaceAnnotation_Landmark_Type = 18 // Left eye, right corner.
        FaceAnnotation_Landmark_LEFT_EYE_BOTTOM_BOUNDARY FaceAnnotation_Landmark_Type = 19 // Left eye, bottom boundary.
        FaceAnnotation_Landmark_LEFT_EYE_LEFT_CORNER FaceAnnotation_Landmark_Type = 20 // Left eye, left corner.
        FaceAnnotation_Landmark_RIGHT_EYE_TOP_BOUNDARY FaceAnnotation_Landmark_Type = 21 // Right eye, top boundary.
        FaceAnnotation_Landmark_RIGHT_EYE_RIGHT_CORNER FaceAnnotation_Landmark_Type = 22 // Right eye, right corner.
        FaceAnnotation_Landmark_RIGHT_EYE_BOTTOM_BOUNDARY FaceAnnotation_Landmark_Type = 23 // Right eye, bottom boundary.
        FaceAnnotation_Landmark_RIGHT_EYE_LEFT_CORNER FaceAnnotation_Landmark_Type = 24 // Right eye, left corner.
        FaceAnnotation_Landmark_LEFT_EYEBROW_UPPER_MIDPOINT FaceAnnotation_Landmark_Type = 25 // Left eyebrow, upper midpoint.
        FaceAnnotation_Landmark_RIGHT_EYEBROW_UPPER_MIDPOINT FaceAnnotation_Landmark_Type = 26 // Right eyebrow, upper midpoint.
        FaceAnnotation_Landmark_LEFT_EAR_TRAGION FaceAnnotation_Landmark_Type = 27 // Left ear tragion.
        FaceAnnotation_Landmark_RIGHT_EAR_TRAGION FaceAnnotation_Landmark_Type = 28 // Right ear tragion.
        FaceAnnotation_Landmark_LEFT_EYE_PUPIL FaceAnnotation_Landmark_Type = 29 // Left eye pupil.
        FaceAnnotation_Landmark_RIGHT_EYE_PUPIL FaceAnnotation_Landmark_Type = 30 // Right eye pupil.
        FaceAnnotation_Landmark_FOREHEAD_GLABELLA FaceAnnotation_Landmark_Type = 31 // Forehead glabella.
        FaceAnnotation_Landmark_CHIN_GNATHION FaceAnnotation_Landmark_Type = 32 // Chin gnathion.
        FaceAnnotation_Landmark_CHIN_LEFT_GONION FaceAnnotation_Landmark_Type = 33 // Chin left gonion.
        FaceAnnotation_Landmark_CHIN_RIGHT_GONION FaceAnnotation_Landmark_Type = 34 // Chin right gonion.
    )

    func (FaceAnnotation_Landmark_Type) Descriptor() protoreflect.EnumDescriptor
    func (x FaceAnnotation_Landmark_Type) Enum() *FaceAnnotation_Landmark_Type
    func (FaceAnnotation_Landmark_Type) EnumDescriptor() ([]byte, []int)
        Deprecated: Use FaceAnnotation_Landmark_Type.Descriptor instead.
    func (x FaceAnnotation_Landmark_Type) Number() protoreflect.EnumNumber
    func (x FaceAnnotation_Landmark_Type) String() string
    func (FaceAnnotation_Landmark_Type) Type() protoreflect.EnumType
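Landmarks arrive as a flat slice on FaceAnnotation, so locating one feature means scanning for its type. A sketch; the helper name landmarkPosition is hypothetical:

    // landmarkPosition returns the Position of the first landmark of the
    // given type (e.g. FaceAnnotation_Landmark_NOSE_TIP), or nil when that
    // landmark was not detected on this face.
    func landmarkPosition(f *visionpb.FaceAnnotation,
        t visionpb.FaceAnnotation_Landmark_Type) *visionpb.Position {
        for _, lm := range f.GetLandmarks() {
            if lm.GetType() == t {
                return lm.GetPosition()
            }
        }
        return nil
    }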
type Feature

    type Feature struct {
        // The feature type.
        Type Feature_Type `protobuf:"varint,1,opt,name=type,proto3,enum=google.cloud.vision.v1p1beta1.Feature_Type" json:"type,omitempty"`
        // Maximum number of results of this type.
        MaxResults int32 `protobuf:"varint,2,opt,name=max_results,json=maxResults,proto3" json:"max_results,omitempty"`
        // Model to use for the feature.
        // Supported values: "builtin/stable" (the default if unset) and
        // "builtin/latest".
        Model string `protobuf:"bytes,3,opt,name=model,proto3" json:"model,omitempty"`
        // contains filtered or unexported fields
    }

Users describe the type of Google Cloud Vision API tasks to perform over images by using *Feature*s. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return.

    func (*Feature) Descriptor() ([]byte, []int)
        Deprecated: Use Feature.ProtoReflect.Descriptor instead.
    func (x *Feature) GetMaxResults() int32
    func (x *Feature) GetModel() string
    func (x *Feature) GetType() Feature_Type
    func (*Feature) ProtoMessage()
    func (x *Feature) ProtoReflect() protoreflect.Message
    func (x *Feature) Reset()
    func (x *Feature) String() string

type Feature_Type

    type Feature_Type int32

Type of image feature.

    const (
        Feature_TYPE_UNSPECIFIED Feature_Type = 0 // Unspecified feature type.
        Feature_FACE_DETECTION Feature_Type = 1 // Run face detection.
        Feature_LANDMARK_DETECTION Feature_Type = 2 // Run landmark detection.
        Feature_LOGO_DETECTION Feature_Type = 3 // Run logo detection.
        Feature_LABEL_DETECTION Feature_Type = 4 // Run label detection.
        Feature_TEXT_DETECTION Feature_Type = 5 // Run OCR.
        // Run dense text document OCR. Takes precedence when both
        // DOCUMENT_TEXT_DETECTION and TEXT_DETECTION are present.
        Feature_DOCUMENT_TEXT_DETECTION Feature_Type = 11
        Feature_SAFE_SEARCH_DETECTION Feature_Type = 6 // Run computer vision models to compute image safe-search properties.
        Feature_IMAGE_PROPERTIES Feature_Type = 7 // Compute a set of image properties, such as the image's dominant colors.
        Feature_CROP_HINTS Feature_Type = 9 // Run crop hints.
        Feature_WEB_DETECTION Feature_Type = 10 // Run web detection.
    )

    func (Feature_Type) Descriptor() protoreflect.EnumDescriptor
    func (x Feature_Type) Enum() *Feature_Type
    func (Feature_Type) EnumDescriptor() ([]byte, []int)
        Deprecated: Use Feature_Type.Descriptor instead.
    func (x Feature_Type) Number() protoreflect.EnumNumber
    func (x Feature_Type) String() string
    func (Feature_Type) Type() protoreflect.EnumType
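A sketch of a feature list asking for up to ten labels plus dense document OCR in the same request; leaving Model empty selects "builtin/stable":

    // Label detection capped at ten results, plus document OCR; when both
    // DOCUMENT_TEXT_DETECTION and TEXT_DETECTION are requested, the
    // document variant takes precedence.
    var features = []*visionpb.Feature{
        {Type: visionpb.Feature_LABEL_DETECTION, MaxResults: 10},
        {Type: visionpb.Feature_DOCUMENT_TEXT_DETECTION},
    }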
type Image

    type Image struct {
        // Image content, represented as a stream of bytes.
        // Note: as with all `bytes` fields, protobuffers use a pure binary
        // representation, whereas JSON representations use base64.
        Content []byte `protobuf:"bytes,1,opt,name=content,proto3" json:"content,omitempty"`
        // Google Cloud Storage image location. If both `content` and `source`
        // are provided for an image, `content` takes precedence and is
        // used to perform the image annotation request.
        Source *ImageSource `protobuf:"bytes,2,opt,name=source,proto3" json:"source,omitempty"`
        // contains filtered or unexported fields
    }

Client image to perform Google Cloud Vision API tasks over.

    func (*Image) Descriptor() ([]byte, []int)
        Deprecated: Use Image.ProtoReflect.Descriptor instead.
    func (x *Image) GetContent() []byte
    func (x *Image) GetSource() *ImageSource
    func (*Image) ProtoMessage()
    func (x *Image) ProtoReflect() protoreflect.Message
    func (x *Image) Reset()
    func (x *Image) String() string

type ImageAnnotatorClient

    type ImageAnnotatorClient interface {
        // Run image detection and annotation for a batch of images.
        BatchAnnotateImages(ctx context.Context, in *BatchAnnotateImagesRequest, opts ...grpc.CallOption) (*BatchAnnotateImagesResponse, error)
    }

ImageAnnotatorClient is the client API for ImageAnnotator service. For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.

    func NewImageAnnotatorClient(cc grpc.ClientConnInterface) ImageAnnotatorClient

type ImageAnnotatorServer

    type ImageAnnotatorServer interface {
        // Run image detection and annotation for a batch of images.
        BatchAnnotateImages(context.Context, *BatchAnnotateImagesRequest) (*BatchAnnotateImagesResponse, error)
    }

ImageAnnotatorServer is the server API for ImageAnnotator service.
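A sketch of the two ways to populate an Image; the gs:// URI and file path are placeholders, and fromURI/fromFile are hypothetical helpers:

    import "os"

    // fromURI references an object the service fetches itself.
    func fromURI() *visionpb.Image {
        return &visionpb.Image{
            Source: &visionpb.ImageSource{ImageUri: "gs://my-bucket/photo.jpg"},
        }
    }

    // fromFile embeds the raw bytes in the request; if Content and Source
    // are both set, Content takes precedence.
    func fromFile(path string) (*visionpb.Image, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        return &visionpb.Image{Content: raw}, nil
    }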
type ImageContext

    type ImageContext struct {
        // lat/long rectangle that specifies the location of the image.
        LatLongRect *LatLongRect `protobuf:"bytes,1,opt,name=lat_long_rect,json=latLongRect,proto3" json:"lat_long_rect,omitempty"`
        // List of languages to use for TEXT_DETECTION. In most cases, an
        // empty value yields the best results since it enables automatic
        // language detection. For languages based on the Latin alphabet,
        // setting `language_hints` is not needed. In rare cases, when the
        // language of the text in the image is known, setting a hint will
        // help get better results (although it will be a significant
        // hindrance if the hint is wrong). Text detection returns an error if
        // one or more of the specified languages is not one of the
        // [supported languages](https://cloud.google.com/vision/docs/languages).
        LanguageHints []string `protobuf:"bytes,2,rep,name=language_hints,json=languageHints,proto3" json:"language_hints,omitempty"`
        // Parameters for crop hints annotation request.
        CropHintsParams *CropHintsParams `protobuf:"bytes,4,opt,name=crop_hints_params,json=cropHintsParams,proto3" json:"crop_hints_params,omitempty"`
        // Parameters for web detection.
        WebDetectionParams *WebDetectionParams `protobuf:"bytes,6,opt,name=web_detection_params,json=webDetectionParams,proto3" json:"web_detection_params,omitempty"`
        // Parameters for text detection and document text detection.
        TextDetectionParams *TextDetectionParams `protobuf:"bytes,12,opt,name=text_detection_params,json=textDetectionParams,proto3" json:"text_detection_params,omitempty"`
        // contains filtered or unexported fields
    }

Image context and/or feature-specific parameters.

    func (*ImageContext) Descriptor() ([]byte, []int)
        Deprecated: Use ImageContext.ProtoReflect.Descriptor instead.
    func (x *ImageContext) GetCropHintsParams() *CropHintsParams
    func (x *ImageContext) GetLanguageHints() []string
    func (x *ImageContext) GetLatLongRect() *LatLongRect
    func (x *ImageContext) GetTextDetectionParams() *TextDetectionParams
    func (x *ImageContext) GetWebDetectionParams() *WebDetectionParams
    func (*ImageContext) ProtoMessage()
    func (x *ImageContext) ProtoReflect() protoreflect.Message
    func (x *ImageContext) Reset()
    func (x *ImageContext) String() string
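A sketch of hinting a non-Latin script for TEXT_DETECTION; as the field comment above notes, hints are best omitted for Latin-alphabet text, and a wrong hint actively hurts:

    // Hint that the photographed text is Japanese.
    var ictx = &visionpb.ImageContext{LanguageHints: []string{"ja"}}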
type ImageProperties

    type ImageProperties struct {
        // If present, dominant colors completed successfully.
        DominantColors *DominantColorsAnnotation `protobuf:"bytes,1,opt,name=dominant_colors,json=dominantColors,proto3" json:"dominant_colors,omitempty"`
        // contains filtered or unexported fields
    }

Stores image properties, such as dominant colors.

    func (*ImageProperties) Descriptor() ([]byte, []int)
        Deprecated: Use ImageProperties.ProtoReflect.Descriptor instead.
    func (x *ImageProperties) GetDominantColors() *DominantColorsAnnotation
    func (*ImageProperties) ProtoMessage()
    func (x *ImageProperties) ProtoReflect() protoreflect.Message
    func (x *ImageProperties) Reset()
    func (x *ImageProperties) String() string

type ImageSource

    type ImageSource struct {
        // NOTE: For new code `image_uri` below is preferred.
        // Google Cloud Storage image URI, which must be in the following
        // form: `gs://bucket_name/object_name` (for details, see
        // [Google Cloud Storage Request
        // URIs](https://cloud.google.com/storage/docs/reference-uris)).
        // NOTE: Cloud Storage object versioning is not supported.
        GcsImageUri string `protobuf:"bytes,1,opt,name=gcs_image_uri,json=gcsImageUri,proto3" json:"gcs_image_uri,omitempty"`
        // Image URI which supports:
        // 1) Google Cloud Storage image URI, which must be in the following
        //    form: `gs://bucket_name/object_name` (for details, see
        //    [Google Cloud Storage Request
        //    URIs](https://cloud.google.com/storage/docs/reference-uris)).
        //    NOTE: Cloud Storage object versioning is not supported.
        // 2) Publicly accessible image HTTP/HTTPS URL.
        // This is preferred over the legacy `gcs_image_uri` above. When both
        // `gcs_image_uri` and `image_uri` are specified, `image_uri` takes
        // precedence.
        ImageUri string `protobuf:"bytes,2,opt,name=image_uri,json=imageUri,proto3" json:"image_uri,omitempty"`
        // contains filtered or unexported fields
    }

External image source (Google Cloud Storage image location).

    func (*ImageSource) Descriptor() ([]byte, []int)
        Deprecated: Use ImageSource.ProtoReflect.Descriptor instead.
    func (x *ImageSource) GetGcsImageUri() string
    func (x *ImageSource) GetImageUri() string
    func (*ImageSource) ProtoMessage()
    func (x *ImageSource) ProtoReflect() protoreflect.Message
    func (x *ImageSource) Reset()
    func (x *ImageSource) String() string
type LatLongRect

    type LatLongRect struct {
        // Min lat/long pair.
        MinLatLng *latlng.LatLng `protobuf:"bytes,1,opt,name=min_lat_lng,json=minLatLng,proto3" json:"min_lat_lng,omitempty"`
        // Max lat/long pair.
        MaxLatLng *latlng.LatLng `protobuf:"bytes,2,opt,name=max_lat_lng,json=maxLatLng,proto3" json:"max_lat_lng,omitempty"`
        // contains filtered or unexported fields
    }

Rectangle determined by min and max `LatLng` pairs.

    func (*LatLongRect) Descriptor() ([]byte, []int)
        Deprecated: Use LatLongRect.ProtoReflect.Descriptor instead.
    func (x *LatLongRect) GetMaxLatLng() *latlng.LatLng
    func (x *LatLongRect) GetMinLatLng() *latlng.LatLng
    func (*LatLongRect) ProtoMessage()
    func (x *LatLongRect) ProtoReflect() protoreflect.Message
    func (x *LatLongRect) Reset()
    func (x *LatLongRect) String() string

type Likelihood

    type Likelihood int32

A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.

    const (
        Likelihood_UNKNOWN Likelihood = 0 // Unknown likelihood.
        Likelihood_VERY_UNLIKELY Likelihood = 1 // It is very unlikely that the image belongs to the specified vertical.
        Likelihood_UNLIKELY Likelihood = 2 // It is unlikely that the image belongs to the specified vertical.
        Likelihood_POSSIBLE Likelihood = 3 // It is possible that the image belongs to the specified vertical.
        Likelihood_LIKELY Likelihood = 4 // It is likely that the image belongs to the specified vertical.
        Likelihood_VERY_LIKELY Likelihood = 5 // It is very likely that the image belongs to the specified vertical.
    )

    func (Likelihood) Descriptor() protoreflect.EnumDescriptor
    func (x Likelihood) Enum() *Likelihood
    func (Likelihood) EnumDescriptor() ([]byte, []int)
        Deprecated: Use Likelihood.Descriptor instead.
    func (x Likelihood) Number() protoreflect.EnumNumber
    func (x Likelihood) String() string
    func (Likelihood) Type() protoreflect.EnumType
type LocationInfo

    type LocationInfo struct {
        // lat/long location coordinates.
        LatLng *latlng.LatLng `protobuf:"bytes,1,opt,name=lat_lng,json=latLng,proto3" json:"lat_lng,omitempty"`
        // contains filtered or unexported fields
    }

Detected entity location information.

    func (*LocationInfo) Descriptor() ([]byte, []int)
        Deprecated: Use LocationInfo.ProtoReflect.Descriptor instead.
    func (x *LocationInfo) GetLatLng() *latlng.LatLng
    func (*LocationInfo) ProtoMessage()
    func (x *LocationInfo) ProtoReflect() protoreflect.Message
    func (x *LocationInfo) Reset()
    func (x *LocationInfo) String() string

type Page

    type Page struct {
        // Additional information detected on the page.
        Property *TextAnnotation_TextProperty `protobuf:"bytes,1,opt,name=property,proto3" json:"property,omitempty"`
        // Page width in pixels.
        Width int32 `protobuf:"varint,2,opt,name=width,proto3" json:"width,omitempty"`
        // Page height in pixels.
        Height int32 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
        // List of blocks of text, images etc on this page.
        Blocks []*Block `protobuf:"bytes,4,rep,name=blocks,proto3" json:"blocks,omitempty"`
        // Confidence of the OCR results on the page. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,5,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // contains filtered or unexported fields
    }

Detected page from OCR.

    func (*Page) Descriptor() ([]byte, []int)
        Deprecated: Use Page.ProtoReflect.Descriptor instead.
    func (x *Page) GetBlocks() []*Block
    func (x *Page) GetConfidence() float32
    func (x *Page) GetHeight() int32
    func (x *Page) GetProperty() *TextAnnotation_TextProperty
    func (x *Page) GetWidth() int32
    func (*Page) ProtoMessage()
    func (x *Page) ProtoReflect() protoreflect.Message
    func (x *Page) Reset()
    func (x *Page) String() string
type Paragraph

    type Paragraph struct {
        // Additional information detected for the paragraph.
        Property *TextAnnotation_TextProperty `protobuf:"bytes,1,opt,name=property,proto3" json:"property,omitempty"`
        // The bounding box for the paragraph.
        // The vertices are in the order of top-left, top-right, bottom-right,
        // bottom-left. When a rotation of the bounding box is detected the
        // rotation is represented as around the top-left corner as defined
        // when the text is read in the 'natural' orientation.
        // For example:
        //   * when the text is horizontal it might look like:
        //       0----1
        //       |    |
        //       3----2
        //   * when it's rotated 180 degrees around the top-left corner it
        //     becomes:
        //       2----3
        //       |    |
        //       1----0
        //   and the vertex order will still be (0, 1, 2, 3).
        BoundingBox *BoundingPoly `protobuf:"bytes,2,opt,name=bounding_box,json=boundingBox,proto3" json:"bounding_box,omitempty"`
        // List of words in this paragraph.
        Words []*Word `protobuf:"bytes,3,rep,name=words,proto3" json:"words,omitempty"`
        // Confidence of the OCR results for the paragraph. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,4,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // contains filtered or unexported fields
    }

Structural unit of text representing a number of words in a certain order.

    func (*Paragraph) Descriptor() ([]byte, []int)
        Deprecated: Use Paragraph.ProtoReflect.Descriptor instead.
    func (x *Paragraph) GetBoundingBox() *BoundingPoly
    func (x *Paragraph) GetConfidence() float32
    func (x *Paragraph) GetProperty() *TextAnnotation_TextProperty
    func (x *Paragraph) GetWords() []*Word
    func (*Paragraph) ProtoMessage()
    func (x *Paragraph) ProtoReflect() protoreflect.Message
    func (x *Paragraph) Reset()
    func (x *Paragraph) String() string
type Position

    type Position struct {
        // X coordinate.
        X float32 `protobuf:"fixed32,1,opt,name=x,proto3" json:"x,omitempty"`
        // Y coordinate.
        Y float32 `protobuf:"fixed32,2,opt,name=y,proto3" json:"y,omitempty"`
        // Z coordinate (or depth).
        Z float32 `protobuf:"fixed32,3,opt,name=z,proto3" json:"z,omitempty"`
        // contains filtered or unexported fields
    }

A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.

    func (*Position) Descriptor() ([]byte, []int)
        Deprecated: Use Position.ProtoReflect.Descriptor instead.
    func (x *Position) GetX() float32
    func (x *Position) GetY() float32
    func (x *Position) GetZ() float32
    func (*Position) ProtoMessage()
    func (x *Position) ProtoReflect() protoreflect.Message
    func (x *Position) Reset()
    func (x *Position) String() string

type Property

    type Property struct {
        // Name of the property.
        Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
        // Value of the property.
        Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
        // Value of numeric properties.
        Uint64Value uint64 `protobuf:"varint,3,opt,name=uint64_value,json=uint64Value,proto3" json:"uint64_value,omitempty"`
        // contains filtered or unexported fields
    }

A `Property` consists of a user-supplied name/value pair.

    func (*Property) Descriptor() ([]byte, []int)
        Deprecated: Use Property.ProtoReflect.Descriptor instead.
    func (x *Property) GetName() string
    func (x *Property) GetUint64Value() uint64
    func (x *Property) GetValue() string
    func (*Property) ProtoMessage()
    func (x *Property) ProtoReflect() protoreflect.Message
    func (x *Property) Reset()
    func (x *Property) String() string
type SafeSearchAnnotation

    type SafeSearchAnnotation struct {
        // Represents the adult content likelihood for the image. Adult
        // content may contain elements such as nudity, pornographic images or
        // cartoons, or sexual activities.
        Adult Likelihood `protobuf:"varint,1,opt,name=adult,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"adult,omitempty"`
        // Spoof likelihood. The likelihood that a modification
        // was made to the image's canonical version to make it appear
        // funny or offensive.
        Spoof Likelihood `protobuf:"varint,2,opt,name=spoof,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"spoof,omitempty"`
        // Likelihood that this is a medical image.
        Medical Likelihood `protobuf:"varint,3,opt,name=medical,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"medical,omitempty"`
        // Likelihood that this image contains violent content.
        Violence Likelihood `protobuf:"varint,4,opt,name=violence,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"violence,omitempty"`
        // Likelihood that the request image contains racy content. Racy
        // content may include (but is not limited to) skimpy or sheer
        // clothing, strategically covered nudity, lewd or provocative poses,
        // or close-ups of sensitive body areas.
        Racy Likelihood `protobuf:"varint,9,opt,name=racy,proto3,enum=google.cloud.vision.v1p1beta1.Likelihood" json:"racy,omitempty"`
        // contains filtered or unexported fields
    }

Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).

    func (*SafeSearchAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use SafeSearchAnnotation.ProtoReflect.Descriptor instead.
    func (x *SafeSearchAnnotation) GetAdult() Likelihood
    func (x *SafeSearchAnnotation) GetMedical() Likelihood
    func (x *SafeSearchAnnotation) GetRacy() Likelihood
    func (x *SafeSearchAnnotation) GetSpoof() Likelihood
    func (x *SafeSearchAnnotation) GetViolence() Likelihood
    func (*SafeSearchAnnotation) ProtoMessage()
    func (x *SafeSearchAnnotation) ProtoReflect() protoreflect.Message
    func (x *SafeSearchAnnotation) Reset()
    func (x *SafeSearchAnnotation) String() string
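A sketch of a simple moderation gate over these verticals, reusing the ordered Likelihood buckets; the name flagged and the POSSIBLE threshold are illustrative:

    // flagged reports whether adult or racy content is at least POSSIBLE,
    // a deliberately conservative cutoff.
    func flagged(s *visionpb.SafeSearchAnnotation) bool {
        return s.GetAdult() >= visionpb.Likelihood_POSSIBLE ||
            s.GetRacy() >= visionpb.Likelihood_POSSIBLE
    }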
type Symbol

    type Symbol struct {
        // Additional information detected for the symbol.
        Property *TextAnnotation_TextProperty `protobuf:"bytes,1,opt,name=property,proto3" json:"property,omitempty"`
        // The bounding box for the symbol.
        // The vertices are in the order of top-left, top-right, bottom-right,
        // bottom-left. When a rotation of the bounding box is detected the
        // rotation is represented as around the top-left corner as defined
        // when the text is read in the 'natural' orientation.
        // For example:
        //   * when the text is horizontal it might look like:
        //       0----1
        //       |    |
        //       3----2
        //   * when it's rotated 180 degrees around the top-left corner it
        //     becomes:
        //       2----3
        //       |    |
        //       1----0
        //   and the vertex order will still be (0, 1, 2, 3).
        BoundingBox *BoundingPoly `protobuf:"bytes,2,opt,name=bounding_box,json=boundingBox,proto3" json:"bounding_box,omitempty"`
        // The actual UTF-8 representation of the symbol.
        Text string `protobuf:"bytes,3,opt,name=text,proto3" json:"text,omitempty"`
        // Confidence of the OCR results for the symbol. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,4,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // contains filtered or unexported fields
    }

A single symbol representation.

    func (*Symbol) Descriptor() ([]byte, []int)
        Deprecated: Use Symbol.ProtoReflect.Descriptor instead.
    func (x *Symbol) GetBoundingBox() *BoundingPoly
    func (x *Symbol) GetConfidence() float32
    func (x *Symbol) GetProperty() *TextAnnotation_TextProperty
    func (x *Symbol) GetText() string
    func (*Symbol) ProtoMessage()
    func (x *Symbol) ProtoReflect() protoreflect.Message
    func (x *Symbol) Reset()
    func (x *Symbol) String() string
type TextAnnotation

    type TextAnnotation struct {
        // List of pages detected by OCR.
        Pages []*Page `protobuf:"bytes,1,rep,name=pages,proto3" json:"pages,omitempty"`
        // UTF-8 text detected on the pages.
        Text string `protobuf:"bytes,2,opt,name=text,proto3" json:"text,omitempty"`
        // contains filtered or unexported fields
    }

TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this:

    TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol

Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the [TextAnnotation.TextProperty][google.cloud.vision.v1p1beta1.TextAnnotation.TextProperty] message definition below for more detail.

    func (*TextAnnotation) Descriptor() ([]byte, []int)
        Deprecated: Use TextAnnotation.ProtoReflect.Descriptor instead.
    func (x *TextAnnotation) GetPages() []*Page
    func (x *TextAnnotation) GetText() string
    func (*TextAnnotation) ProtoMessage()
    func (x *TextAnnotation) ProtoReflect() protoreflect.Message
    func (x *TextAnnotation) Reset()
    func (x *TextAnnotation) String() string
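A sketch of walking that hierarchy down to symbols to rebuild the text of one block. The Word type is documented elsewhere in this package; its GetSymbols accessor is assumed here to mirror the accessors shown on this page:

    import "strings"

    // blockText concatenates every symbol in a block, separating words
    // with single spaces (detected breaks are ignored in this sketch).
    func blockText(b *visionpb.Block) string {
        var sb strings.Builder
        for _, p := range b.GetParagraphs() {
            for _, w := range p.GetWords() {
                for _, s := range w.GetSymbols() { // assumed Word accessor
                    sb.WriteString(s.GetText())
                }
                sb.WriteByte(' ')
            }
        }
        return strings.TrimSpace(sb.String())
    }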
type TextAnnotation_DetectedBreak

    type TextAnnotation_DetectedBreak struct {
        // Detected break type.
        Type TextAnnotation_DetectedBreak_BreakType `protobuf:"varint,1,opt,name=type,proto3,enum=google.cloud.vision.v1p1beta1.TextAnnotation_DetectedBreak_BreakType" json:"type,omitempty"`
        // True if break prepends the element.
        IsPrefix bool `protobuf:"varint,2,opt,name=is_prefix,json=isPrefix,proto3" json:"is_prefix,omitempty"`
        // contains filtered or unexported fields
    }

Detected start or end of a structural component.

    func (*TextAnnotation_DetectedBreak) Descriptor() ([]byte, []int)
        Deprecated: Use TextAnnotation_DetectedBreak.ProtoReflect.Descriptor instead.
    func (x *TextAnnotation_DetectedBreak) GetIsPrefix() bool
    func (x *TextAnnotation_DetectedBreak) GetType() TextAnnotation_DetectedBreak_BreakType
    func (*TextAnnotation_DetectedBreak) ProtoMessage()
    func (x *TextAnnotation_DetectedBreak) ProtoReflect() protoreflect.Message
    func (x *TextAnnotation_DetectedBreak) Reset()
    func (x *TextAnnotation_DetectedBreak) String() string

type TextAnnotation_DetectedBreak_BreakType

    type TextAnnotation_DetectedBreak_BreakType int32

Enum to denote the type of break found. New line, space etc.

    const (
        TextAnnotation_DetectedBreak_UNKNOWN TextAnnotation_DetectedBreak_BreakType = 0 // Unknown break label type.
        TextAnnotation_DetectedBreak_SPACE TextAnnotation_DetectedBreak_BreakType = 1 // Regular space.
        TextAnnotation_DetectedBreak_SURE_SPACE TextAnnotation_DetectedBreak_BreakType = 2 // Sure space (very wide).
        TextAnnotation_DetectedBreak_EOL_SURE_SPACE TextAnnotation_DetectedBreak_BreakType = 3 // Line-wrapping break.
        // End-line hyphen that is not present in text; does not co-occur
        // with `SPACE`, `LEADER_SPACE`, or `LINE_BREAK`.
        TextAnnotation_DetectedBreak_HYPHEN TextAnnotation_DetectedBreak_BreakType = 4
        TextAnnotation_DetectedBreak_LINE_BREAK TextAnnotation_DetectedBreak_BreakType = 5 // Line break that ends a paragraph.
    )

    func (TextAnnotation_DetectedBreak_BreakType) Descriptor() protoreflect.EnumDescriptor
    func (x TextAnnotation_DetectedBreak_BreakType) Enum() *TextAnnotation_DetectedBreak_BreakType
    func (TextAnnotation_DetectedBreak_BreakType) EnumDescriptor() ([]byte, []int)
        Deprecated: Use TextAnnotation_DetectedBreak_BreakType.Descriptor instead.
    func (x TextAnnotation_DetectedBreak_BreakType) Number() protoreflect.EnumNumber
    func (x TextAnnotation_DetectedBreak_BreakType) String() string
    func (TextAnnotation_DetectedBreak_BreakType) Type() protoreflect.EnumType
type TextAnnotation_DetectedLanguage

    type TextAnnotation_DetectedLanguage struct {
        // The BCP-47 language code, such as "en-US" or "sr-Latn". For more
        // information, see
        // http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
        LanguageCode string `protobuf:"bytes,1,opt,name=language_code,json=languageCode,proto3" json:"language_code,omitempty"`
        // Confidence of detected language. Range [0, 1].
        Confidence float32 `protobuf:"fixed32,2,opt,name=confidence,proto3" json:"confidence,omitempty"`
        // contains filtered or unexported fields
    }

Detected language for a structural component.

    func (*TextAnnotation_DetectedLanguage) Descriptor() ([]byte, []int)
        Deprecated: Use TextAnnotation_DetectedLanguage.ProtoReflect.Descriptor instead.
    func (x *TextAnnotation_DetectedLanguage) GetConfidence() float32
    func (x *TextAnnotation_DetectedLanguage) GetLanguageCode() string
    func (*TextAnnotation_DetectedLanguage) ProtoMessage()
    func (x *TextAnnotation_DetectedLanguage) ProtoReflect() protoreflect.Message
    func (x *TextAnnotation_DetectedLanguage) Reset()
    func (x *TextAnnotation_DetectedLanguage) String() string

type TextAnnotation_TextProperty

    type TextAnnotation_TextProperty struct {
        // A list of detected languages together with confidence.
        DetectedLanguages []*TextAnnotation_DetectedLanguage `protobuf:"bytes,1,rep,name=detected_languages,json=detectedLanguages,proto3" json:"detected_languages,omitempty"`
        // Detected start or end of a text segment.
        DetectedBreak *TextAnnotation_DetectedBreak `protobuf:"bytes,2,opt,name=detected_break,json=detectedBreak,proto3" json:"detected_break,omitempty"`
        // contains filtered or unexported fields
    }

Additional information detected on the structural component.

    func (*TextAnnotation_TextProperty) Descriptor() ([]byte, []int)
        Deprecated: Use TextAnnotation_TextProperty.ProtoReflect.Descriptor instead.
    func (x *TextAnnotation_TextProperty) GetDetectedBreak() *TextAnnotation_DetectedBreak
    func (x *TextAnnotation_TextProperty) GetDetectedLanguages() []*TextAnnotation_DetectedLanguage
    func (*TextAnnotation_TextProperty) ProtoMessage()
    func (x *TextAnnotation_TextProperty) ProtoReflect() protoreflect.Message
    func (x *TextAnnotation_TextProperty) Reset()
    func (x *TextAnnotation_TextProperty) String() string
func (*WebDetection) Descriptor Uses func (*WebDetection) Descriptor() ([]byte, []int) Deprecated: Use WebDetection.ProtoReflect.Descriptor instead. func (*WebDetection) GetBestGuessLabels Uses func (x *WebDetection) GetBestGuessLabels() []*WebDetection_WebLabel func (*WebDetection) GetFullMatchingImages Uses func (x *WebDetection) GetFullMatchingImages() []*WebDetection_WebImage func (*WebDetection) GetPagesWithMatchingImages Uses func (x *WebDetection) GetPagesWithMatchingImages() []*WebDetection_WebPage func (*WebDetection) GetPartialMatchingImages Uses func (x *WebDetection) GetPartialMatchingImages() []*WebDetection_WebImage func (*WebDetection) GetVisuallySimilarImages Uses func (x *WebDetection) GetVisuallySimilarImages() []*WebDetection_WebImage func (*WebDetection) GetWebEntities Uses func (x *WebDetection) GetWebEntities() []*WebDetection_WebEntity func (*WebDetection) ProtoMessage Uses func (*WebDetection) ProtoMessage() func (*WebDetection) ProtoReflect Uses func (x *WebDetection) ProtoReflect() protoreflect.Message func (*WebDetection) Reset Uses func (x *WebDetection) Reset() func (*WebDetection) String Uses func (x *WebDetection) String() string type WebDetectionParams Uses type WebDetectionParams struct { // Whether to include results derived from the geo information in the image. IncludeGeoResults bool `protobuf:"varint,2,opt,name=include_geo_results,json=includeGeoResults,proto3" json:"include_geo_results,omitempty"` // contains filtered or unexported fields } Parameters for web detection request. func (*WebDetectionParams) Descriptor Uses func (*WebDetectionParams) Descriptor() ([]byte, []int) Deprecated: Use WebDetectionParams.ProtoReflect.Descriptor instead. func (*WebDetectionParams) GetIncludeGeoResults Uses func (x *WebDetectionParams) GetIncludeGeoResults() bool func (*WebDetectionParams) ProtoMessage Uses func (*WebDetectionParams) ProtoMessage() func (*WebDetectionParams) ProtoReflect Uses func (x *WebDetectionParams) ProtoReflect() protoreflect.Message func (*WebDetectionParams) Reset Uses func (x *WebDetectionParams) Reset() func (*WebDetectionParams) String Uses func (x *WebDetectionParams) String() string type WebDetection_WebEntity Uses type WebDetection_WebEntity struct { // Opaque entity ID. EntityId string `protobuf:"bytes,1,opt,name=entity_id,json=entityId,proto3" json:"entity_id,omitempty"` // Overall relevancy score for the entity. // Not normalized and not comparable across different image queries. Score float32 `protobuf:"fixed32,2,opt,name=score,proto3" json:"score,omitempty"` // Canonical description of the entity, in English. Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"` // contains filtered or unexported fields } Entity deduced from similar images on the Internet. func (*WebDetection_WebEntity) Descriptor Uses func (*WebDetection_WebEntity) Descriptor() ([]byte, []int) Deprecated: Use WebDetection_WebEntity.ProtoReflect.Descriptor instead. 
func (*WebDetection_WebEntity) GetDescription Uses func (x *WebDetection_WebEntity) GetDescription() string func (*WebDetection_WebEntity) GetEntityId Uses func (x *WebDetection_WebEntity) GetEntityId() string func (*WebDetection_WebEntity) GetScore Uses func (x *WebDetection_WebEntity) GetScore() float32 func (*WebDetection_WebEntity) ProtoMessage Uses func (*WebDetection_WebEntity) ProtoMessage() func (*WebDetection_WebEntity) ProtoReflect Uses func (x *WebDetection_WebEntity) ProtoReflect() protoreflect.Message func (*WebDetection_WebEntity) Reset Uses func (x *WebDetection_WebEntity) Reset() func (*WebDetection_WebEntity) String Uses func (x *WebDetection_WebEntity) String() string type WebDetection_WebImage Uses type WebDetection_WebImage struct { // The result image URL. Url string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"` // (Deprecated) Overall relevancy score for the image. Score float32 `protobuf:"fixed32,2,opt,name=score,proto3" json:"score,omitempty"` // contains filtered or unexported fields } Metadata for online images. func (*WebDetection_WebImage) Descriptor Uses func (*WebDetection_WebImage) Descriptor() ([]byte, []int) Deprecated: Use WebDetection_WebImage.ProtoReflect.Descriptor instead. func (*WebDetection_WebImage) GetScore Uses func (x *WebDetection_WebImage) GetScore() float32 func (*WebDetection_WebImage) GetUrl Uses func (x *WebDetection_WebImage) GetUrl() string func (*WebDetection_WebImage) ProtoMessage Uses func (*WebDetection_WebImage) ProtoMessage() func (*WebDetection_WebImage) ProtoReflect Uses func (x *WebDetection_WebImage) ProtoReflect() protoreflect.Message func (*WebDetection_WebImage) Reset Uses func (x *WebDetection_WebImage) Reset() func (*WebDetection_WebImage) String Uses func (x *WebDetection_WebImage) String() string type WebDetection_WebLabel Uses type WebDetection_WebLabel struct { // Label for extra metadata. Label string `protobuf:"bytes,1,opt,name=label,proto3" json:"label,omitempty"` // The BCP-47 language code for `label`, such as "en-US" or "sr-Latn". // For more information, see // http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. LanguageCode string `protobuf:"bytes,2,opt,name=language_code,json=languageCode,proto3" json:"language_code,omitempty"` // contains filtered or unexported fields } Label to provide extra metadata for the web detection. func (*WebDetection_WebLabel) Descriptor Uses func (*WebDetection_WebLabel) Descriptor() ([]byte, []int) Deprecated: Use WebDetection_WebLabel.ProtoReflect.Descriptor instead. func (*WebDetection_WebLabel) GetLabel Uses func (x *WebDetection_WebLabel) GetLabel() string func (*WebDetection_WebLabel) GetLanguageCode Uses func (x *WebDetection_WebLabel) GetLanguageCode() string func (*WebDetection_WebLabel) ProtoMessage Uses func (*WebDetection_WebLabel) ProtoMessage() func (*WebDetection_WebLabel) ProtoReflect Uses func (x *WebDetection_WebLabel) ProtoReflect() protoreflect.Message func (*WebDetection_WebLabel) Reset Uses func (x *WebDetection_WebLabel) Reset() func (*WebDetection_WebLabel) String Uses func (x *WebDetection_WebLabel) String() string type WebDetection_WebPage Uses type WebDetection_WebPage struct { // The result web page URL. Url string `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"` // (Deprecated) Overall relevancy score for the web page. Score float32 `protobuf:"fixed32,2,opt,name=score,proto3" json:"score,omitempty"` // Title for the web page, may contain HTML markups. 
PageTitle string `protobuf:"bytes,3,opt,name=page_title,json=pageTitle,proto3" json:"page_title,omitempty"` // Fully matching images on the page. // Can include resized copies of the query image. FullMatchingImages []*WebDetection_WebImage `protobuf:"bytes,4,rep,name=full_matching_images,json=fullMatchingImages,proto3" json:"full_matching_images,omitempty"` // Partial matching images on the page. // Those images are similar enough to share some key-point features. For // example an original image will likely have partial matching for its // crops. PartialMatchingImages []*WebDetection_WebImage `protobuf:"bytes,5,rep,name=partial_matching_images,json=partialMatchingImages,proto3" json:"partial_matching_images,omitempty"` // contains filtered or unexported fields } Metadata for web pages. func (*WebDetection_WebPage) Descriptor Uses func (*WebDetection_WebPage) Descriptor() ([]byte, []int) Deprecated: Use WebDetection_WebPage.ProtoReflect.Descriptor instead. func (*WebDetection_WebPage) GetFullMatchingImages Uses func (x *WebDetection_WebPage) GetFullMatchingImages() []*WebDetection_WebImage func (*WebDetection_WebPage) GetPageTitle Uses func (x *WebDetection_WebPage) GetPageTitle() string func (*WebDetection_WebPage) GetPartialMatchingImages Uses func (x *WebDetection_WebPage) GetPartialMatchingImages() []*WebDetection_WebImage func (*WebDetection_WebPage) GetScore Uses func (x *WebDetection_WebPage) GetScore() float32 func (*WebDetection_WebPage) GetUrl Uses func (x *WebDetection_WebPage) GetUrl() string func (*WebDetection_WebPage) ProtoMessage Uses func (*WebDetection_WebPage) ProtoMessage() func (*WebDetection_WebPage) ProtoReflect Uses func (x *WebDetection_WebPage) ProtoReflect() protoreflect.Message func (*WebDetection_WebPage) Reset Uses func (x *WebDetection_WebPage) Reset() func (*WebDetection_WebPage) String Uses func (x *WebDetection_WebPage) String() string type Word Uses type Word struct { // Additional information detected for the word. Property *TextAnnotation_TextProperty `protobuf:"bytes,1,opt,name=property,proto3" json:"property,omitempty"` // The bounding box for the word. // The vertices are in the order of top-left, top-right, bottom-right, // bottom-left. When a rotation of the bounding box is detected the rotation // is represented as around the top-left corner as defined when the text is // read in the 'natural' orientation. // For example: // * when the text is horizontal it might look like: // 0----1 // | | // 3----2 // * when it's rotated 180 degrees around the top-left corner it becomes: // 2----3 // | | // 1----0 // and the vertice order will still be (0, 1, 2, 3). BoundingBox *BoundingPoly `protobuf:"bytes,2,opt,name=bounding_box,json=boundingBox,proto3" json:"bounding_box,omitempty"` // List of symbols in the word. // The order of the symbols follows the natural reading order. Symbols []*Symbol `protobuf:"bytes,3,rep,name=symbols,proto3" json:"symbols,omitempty"` // Confidence of the OCR results for the word. Range [0, 1]. Confidence float32 `protobuf:"fixed32,4,opt,name=confidence,proto3" json:"confidence,omitempty"` // contains filtered or unexported fields } A word representation. func (*Word) Descriptor Uses func (*Word) Descriptor() ([]byte, []int) Deprecated: Use Word.ProtoReflect.Descriptor instead. 
func (*Word) GetBoundingBox Uses func (x *Word) GetBoundingBox() *BoundingPoly func (*Word) GetConfidence Uses func (x *Word) GetConfidence() float32 func (*Word) GetProperty Uses func (x *Word) GetProperty() *TextAnnotation_TextProperty func (*Word) GetSymbols Uses func (x *Word) GetSymbols() []*Symbol func (*Word) ProtoMessage Uses func (*Word) ProtoMessage() func (*Word) ProtoReflect Uses func (x *Word) ProtoReflect() protoreflect.Message func (*Word) Reset Uses func (x *Word) Reset() func (*Word) String Uses func (x *Word) String() string Package vision imports 13 packages (graph) and is imported by 1 packages. Updated 2020-12-03. Refresh now. Tools for package owners.
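As a quick orientation for consuming these messages, here is a minimal sketch using the Python Cloud Vision client (google-cloud-vision), which wraps the same WebDetection protobuf documented above. This is not part of the Go reference; the image URI is a placeholder and error handling is omitted.

# Hedged sketch: reading the WebDetection fields listed above via the Python client.
# Assumes google-cloud-vision is installed and application credentials are configured.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "https://example.com/some-image.jpg"  # placeholder URI

response = client.web_detection(image=image)
web = response.web_detection

for entity in web.web_entities:  # WebDetection.WebEntities
    print(entity.entity_id, entity.score, entity.description)
for label in web.best_guess_labels:  # WebDetection.BestGuessLabels
    print(label.label, label.language_code)
for page in web.pages_with_matching_images:  # WebDetection.PagesWithMatchingImages
    print(page.url, page.page_title)

Note that WebDetection_WebImage.Score and WebDetection_WebPage.Score are marked deprecated in the reference above, so new code should not rely on them.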
How To Make A Smart Contract For An NFT?

Jacky

Part 1/3 of the NFT Tutorial Series: How to Write and Deploy an NFT

Step 1: Connect to the Ethereum network.
Step 2: Develop your application (and API key).
Step 3: Open an Ethereum wallet (address).
Step 4: Fill your wallet with ether from a faucet.
Step 5: Double-check your account balance.
Step 6: Begin working on your project.
Step 7: Install Hardhat.

Similarly, how do I get an NFT smart contract? Let's get going!

Step 1: Install web3.
Step 2: Make a file called mint-nft.js.
Step 3: Obtain your contract ABI.
Step 4: Using IPFS, configure the metadata for your NFT.
Step 5: Create an instance of your contract.
Step 6: Update the .env file.
Step 7: Create your transaction.
Step 8: Sign the transaction.

Also, it is asked: do you need a smart contract for an NFT?

A smart contract, like a vending machine, is arguably a mechanism for implementing a sale agreement between the NFT owner and the buyer. Because smart contracts are self-executing, they may check that the contract's conditions have been followed and execute them without the intervention of a third party or central authority.

Secondly, how do I create a smart contract?

How to Create an Ethereum Smart Contract in Steps:
Step 1: Go to MetaMask and create a wallet. Install MetaMask and activate it in your Chrome browser.
Step 2: Pick one of the test networks. Several test networks may be found in your MetaMask wallet.
Step 3: Fill your wallet with test ether.

Also, how much is an NFT smart contract? One developer reported: "On #ethereum, I just launched a new (#ERC-721 #NFT) smart contract. Cost: $436." You must make certain that you have the ideal code, features, and security checks.

People also ask: does minting an NFT cost money?

If you want to go the conventional approach and start minting your NFTs right now, expect to spend between $50 and $150 per NFT. The total cost of minting 10,000 NFTs might range from $500,000 to $1.5 million.

How much does it cost to create an NFT?

Although the cost of generating an NFT may be less than a dollar, the cost of selling one might be thousands of dollars. Allen Gannett, a software engineer, spent roughly $1,300 on four NFTs, which he sold for $76 on eBay. He also had to pay an additional $88 for the bid.

Related Questions and Answers

How do I start an NFT collection?

You should set aside some NFTs for airdrops, giveaways, and staff before you start selling them. It's entirely up to you whether you want 100, 500, or however many. Your account and collection should have been established on OpenSea after minting some of your NFTs. Connect your wallet to OpenSea and create your collection.

Is it hard to create a smart contract?

Smart contract creation can be a marketable skill for those who understand how to do it. Smart contracts aren't difficult to create, which is a pleasant surprise: the DApp platforms and associated tools make it simple to construct and deploy your own.

How are smart contracts written?

Simple "if/when ... then ..." statements are inserted into code on a blockchain to make smart contracts work. When the predefined conditions are satisfied and verified, the actions are carried out by a network of computers.

How much does it cost to start an NFT project?

Transaction fees for NFTs vary. For example, OpenSea charges a US$70-US$300 account setup fee, with NFT access costing roughly US$10-US$30. When a product is sold, an extra 2.5 percent is added to the price.

How much does it cost to deploy an NFT contract on ETH?

It would cost us 0.308 ETH to deploy the contract on the Ethereum mainnet right now. Our deployment cost would be $1,350 at current pricing.

How much does it cost to write a smart contract?

The quick answer is that deploying even a very basic smart contract may cost upwards of USD 500. The cost of deploying any significant application on the Ethereum mainnet may easily exceed $10,000.

How much does the average NFT sell for?

According to industry data tracker NonFungible, the average selling price of a nonfungible token has dropped to about $2,000, down from an all-time high of roughly $6,900 on Jan. 2.

Can I mint an NFT for free?

With one big condition, you may mint NFTs for free on OpenSea. Here's how to make NFTs for nothing: connect your OpenSea account to an Ethereum wallet. The wallet may be either Coinbase or MetaMask.

How can I sell an NFT for free?

Free NFT creation and distribution: OpenSea requires an ETH wallet, so to begin you must first link an Ethereum wallet to OpenSea. Then create an OpenSea collection, choose the right blockchain, start producing NFTs, and profit!

How does someone sell an NFT?

Simply move the NFT to the marketplace where you wish to sell it (if it isn't already there, or if your NFTs are only stored in your personal crypto wallet and aren't viewable on a marketplace). Then, from within the page of the NFT you wish to sell, click the "Sell" button.

Where can I sell NFT art?

OpenSea is one of the best NFT marketplaces for creators to sell their work. Others include Rarible, SuperRare, Foundation, AtomicMarket, Myth Market, BakerySwap, and KnownOrigin.

Can anyone make a smart contract?

A smart contract may be written by anybody and deployed to the network. All you need to do is understand how to write in a smart contract language and have enough ETH to deploy your contract.

How long does it take to write a smart contract?

Discovery may take anything from two weeks (one sprint) to two months to accomplish. Then comes the transition to planning out the smart contract architecture. This phase is crucial and needs a great deal of planning and care.

Which smart contract platform is the best?

Avalanche (AVAX). In terms of finality, Avalanche claims to be the fastest smart contract platform, even surpassing Solana. It does this by using three separate blockchains in its system, while also keeping costs low and ensuring the platform's scalability.

Is it better to mint or buy an NFT?

Purchasing an NFT at market might help you save money overall. When network traffic is minimal, timing your purchase can save you money on gas costs, allowing you to come in at a lower price point than minting.

How do I create an NFT minting site on Ethereum?

Follow the same web3 steps listed above: install web3, create a mint-nft.js file, obtain your contract ABI, configure the metadata with IPFS, create an instance of your contract, update the .env file, then create and sign your transaction. A hedged Python sketch of this flow appears at the end of this article.

What does a smart contract look like?

A smart contract is a computer-code-based agreement between two parties. Smart contracts are kept in a public database and cannot be modified because they run on the blockchain. A smart contract's transactions are handled by the blockchain, which means they may be executed automatically without the involvement of a third party.

What is the cost of selling an NFT?

Based on OpenSea rankings, the "average" price of an NFT sold on SuperRare is presently 2.15 ether ($5,800), while a Foundation transaction costs .87 ether ($2,400).

How much does it cost to deploy an NFT contract?

An ERC-721 smart contract may cost anywhere from $400 to $2,000 to deploy, depending on the current gas price on Ethereum.

How much does it cost to mint an NFT on Polygon?

FlashNFT is a program that helps authors turn their material into NFTs rapidly. The minting is done on Layer 2 (Polygon), which saves money on gas. For comparison, some suppliers charge about $80 to generate a single NFT token.

Conclusion

An NFT is created and managed by a smart contract. With the tooling above, you can write and deploy a basic NFT smart contract in as little as ten minutes; this article showed you how to make your own.
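As promised above, here is a hedged sketch of the mint flow the web3 step lists describe. The referenced tutorials use JavaScript (web3 plus a mint-nft.js script); this sketch uses web3.py instead. The RPC URL, private key, contract address, ABI file, and the mintNFT(to, tokenURI) function name are all placeholders standing in for your own deployment details, and older web3.py releases spell some helpers differently (e.g. buildTransaction), so adjust before use.

# Hedged web3.py sketch of the mint flow. Every angle-bracketed value is an
# assumption: substitute your own RPC endpoint, key, address, ABI, and metadata.
import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://<your-node-provider>/v2/<api-key>"))  # placeholder RPC endpoint
account = w3.eth.account.from_key("<your-private-key>")                    # keep this out of source control

with open("contract-abi.json") as f:   # the ABI you obtained in step 3
    abi = json.load(f)
contract = w3.eth.contract(address="<your-contract-address>", abi=abi)

# Build, sign, and send the mint transaction (steps 7 and 8).
tx = contract.functions.mintNFT(account.address, "ipfs://<metadata-cid>").build_transaction({
    "from": account.address,
    "nonce": w3.eth.get_transaction_count(account.address),
})
signed = account.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
print("mint transaction hash:", tx_hash.hex())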
Distributing Data Across InitVerse Using Blockchain: An Analytical Review

Blockchain technology has revolutionized various industries by enabling secure and efficient transactions. However, its potential extends beyond just financial applications. One area where blockchain can have a transformative impact is in the distribution of data. By leveraging the power of blockchain, data can be securely distributed across a network, ensuring transparency, immutability, and reliability. In this article, we will explore the potential of blockchain in distributing data across InitVerse, a virtual reality platform, and discuss how this solution can enhance efficiency.

Understanding the Potential of Blockchain in Distributing Data

Blockchain technology provides several key advantages for distributing data. Firstly, it ensures transparency by storing data in a decentralized manner across multiple nodes in a network. This means that every participant in the network has access to the same set of data, eliminating the need for intermediaries and reducing the risk of data manipulation. Additionally, blockchain's immutability ensures that once data is recorded on the blockchain, it cannot be altered or tampered with, providing a reliable and auditable source of information.

Furthermore, blockchain enables the establishment of smart contracts, which are self-executing contracts with predefined rules. These contracts can automate various processes in data distribution, such as licensing, royalties, and payments. By eliminating the need for intermediaries and manual interventions, smart contracts reduce costs, increase efficiency, and enhance trust among participants.

Leveraging InitVerse for Efficient Distribution: A Blockchain Solution

InitVerse, a virtual reality platform, can greatly benefit from the implementation of blockchain technology for data distribution. In InitVerse, users create, share, and interact with virtual reality content. By leveraging blockchain, InitVerse can ensure the secure and efficient distribution of this content.

Through the use of blockchain, InitVerse can establish a decentralized network of nodes that store and verify the authenticity of each piece of virtual reality content. This ensures transparency and eliminates the risk of unauthorized modifications or copyright infringements. Additionally, smart contracts can facilitate seamless licensing and royalty payments for content creators, making the distribution process more streamlined and fair.

The potential of blockchain in distributing data across InitVerse is immense. By leveraging the transparency, immutability, and automation capabilities of blockchain technology, InitVerse can provide a secure and efficient environment for content creators and users. The decentralized nature of blockchain ensures that data is distributed in a transparent manner, eliminating the need for intermediaries and reducing the risk of data manipulation. With the implementation of smart contracts, InitVerse can automate various processes, reducing costs and increasing efficiency. As the virtual reality industry continues to grow, blockchain can play a pivotal role in revolutionizing the distribution of data in InitVerse and similar platforms.
Python Scrapy Data Scraping: Writing Scraped Data to the Database

2016-7-5 邪哥

Following on from the earlier post on creating the scraping project, this one mainly covers writing the scraped data to the database.

For multi-level page scraping and the like, as mentioned in the previous post, consult the Scrapy documentation, or watch for follow-up posts.

Enough preamble. First, create the MySQL table. We create it directly in the mysql test database set up earlier:

CREATE TABLE `base_province` (
  `province_code` tinyint(2) NOT NULL,
  `province_name` varchar(8) DEFAULT '' COMMENT '省份名称',
  `codenum` bigint(12) DEFAULT '0' COMMENT '统一编码',
  `status` tinyint(1) DEFAULT '0' COMMENT '状态',
  PRIMARY KEY (`province_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='省份信息表';

Next, install the Python MySQLdb extension:

# Because of permissions, we again operate as root
[root@localhost ~]# /usr/local/python/2.7.12/bin/pip install mysql-python
Collecting mysql-python
  Downloading MySQL-python-1.2.5.zip (108kB)
    100% |████████████████████████████████| 112kB 65kB/s
Installing collected packages: mysql-python
  Running setup.py install for mysql-python ... done
Successfully installed mysql-python-1.2.5
[193942 refs]

Modify the spider file:

[root@localhost ~]# su jeen
[jeen@localhost root]$ cd /data/jeen/pyscrapy/govstats
[jeen@localhost govstats]$ vi govstats/spiders/jarea.py

# At the top, import MySQLdb
import MySQLdb

# Flesh out the parse method
print "==Province==URL" + response.url
conn = MySQLdb.connect(host='localhost', user='root', passwd='root', port=3306, db='test', charset='utf8')
cur = conn.cursor()
tds = response.css(".provincetr").xpath('./td')
for td in tds:
    code = td.xpath("./a/@href").extract()[0]
    code = code.replace('.html', '')
    name = td.xpath(".//text()").extract()[0]
    name = name.strip()
    codenum = code + ("0" * (12 - len(code)))
    value = [code, name, codenum]
    sql1 = "INSERT INTO `base_province` (`province_code`, `province_name`, `codenum`) VALUES "
    sql2 = " (%s, %s, %s)"
    sql = "".join([sql1, sql2])
    try:
        cur.execute(sql, value)
    except:
        print 'data exist'
    print ":".join(value)
conn.commit()
cur.close()
conn.close()
# Save and quit when done

Test the scrape...

[jeen@localhost govstats]$ scrapy crawl jarea
.....
ImportError: libmysqlclient.so.18: cannot open shared object file: No such file or directory
[244992 refs]

# libmysqlclient.so.18 was not found, so it needs to be linked in. Referring back to
# the earlier LNMP environment setup post, operate as root again
# (tip: you can open multiple terminal windows)
[root@localhost ~]# cd soft/
[root@localhost soft]# cat lnmysql.sh
src=/usr/local/mysql/lib
dest=/usr/lib
for i in `ls $src | egrep "libmysql."`
do
  ln -s $src/$i $dest/$i
done
[root@localhost soft]# ./lnmysql.sh
[root@localhost soft]# ldconfig
# Creating the symlinks this way was covered before, so no need to repeat it here

Try the scrape-and-insert again:

[jeen@localhost govstats]$ scrapy crawl jarea
....
64:宁夏回族自治区:640000000000
65:新疆维吾尔自治区:650000000000
2016-07-05 17:20:08 [scrapy] INFO: Closing spider (finished)
....
2016-07-05 17:20:08 [scrapy] INFO: Spider closed (finished)
[315847 refs]
# Scrape complete

Success. Now go look at the data in the table. :)

Backend  MySQL  Python  Scrapy
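A note on structure: the post wires the database calls directly into the spider's parse callback, which works but mixes scraping and storage. Scrapy's idiomatic home for this logic is an item pipeline. Below is a minimal, hedged sketch of such a pipeline, matching the post's Python 2 / MySQLdb setup and base_province table; the item field names are illustrative assumptions, and the pipeline still has to be enabled via ITEM_PIPELINES in settings.py.

# -*- coding: utf-8 -*-
# Hedged sketch: a Scrapy item pipeline doing the same insert as the spider above.
# Assumes items expose 'code', 'name' and 'codenum' fields (illustrative names).
import MySQLdb

class ProvincePipeline(object):
    def open_spider(self, spider):
        # One connection per crawl, instead of one per parsed page
        self.conn = MySQLdb.connect(host='localhost', user='root', passwd='root',
                                    port=3306, db='test', charset='utf8')
        self.cur = self.conn.cursor()

    def process_item(self, item, spider):
        sql = ("INSERT INTO `base_province` "
               "(`province_code`, `province_name`, `codenum`) VALUES (%s, %s, %s)")
        try:
            self.cur.execute(sql, (item['code'], item['name'], item['codenum']))
            self.conn.commit()
        except MySQLdb.IntegrityError:
            spider.logger.info('data exist: %s', item['code'])
        return item

    def close_spider(self, spider):
        self.cur.close()
        self.conn.close()

Then register it in settings.py, for example: ITEM_PIPELINES = {'govstats.pipelines.ProvincePipeline': 300}.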
## All of the procedure code necessary for Chapter 7 - Densities with discrete data

##############################################################################
## Kernel functions
##############################################################################

if(kernel=="epanechnikov") {
  kk <- function(g) ifelse(abs(g)<=1, 0.75*(1-g^2), 0)
}
if(kernel=="biweight") {
  kk <- function(g) ifelse(abs(g)<=1, (15/16)*(1-g^2)^2, 0)
}
if(kernel=="triweight") {
  kk <- function(g) ifelse(abs(g)<=1, (35/32)*(1-g^2)^3, 0)
}
if(kernel=="gaussian") {
  kk <- function(g) (1/sqrt(2*pi))*exp(-0.5*g^2)
}

## Unordered
kku <- function(g,lam) ifelse(g==0,1,lam)

## Ordered
kko <- function(g,lam) ifelse(g==0,1,lam^abs(g))

###############################################################################
## Density estimation for the mixed data case
###############################################################################

## Multivariate density estimation
## Leave-one-out (loo=0) is turned off
kdensity <- function(x,data.type=NULL,bw=NULL,loo=0){
  x <- as.matrix(x)
  q <- ncol(x)
  n <- nrow(x)
  ones <- rep(1,n)
  if(is.null(data.type)==TRUE){
    data.type <- as.matrix(rep(1,q))
  }
  if(is.null(bw)==TRUE){
    ## Rule of thumb bandwidth
    bw <- rep(0.5,q)
    for (j in 1:q){
      if(data.type[j]==0){
        bw[j] <- 1.06*sd(x[,j])*n^(-1/(4+q))
      }
    }
  }
  ## creating product bandwidths for cont variables
  bw.prod <- 1
  for (j in 1:q){
    if(data.type[j]==0){
      bw.prod <- bw.prod*bw[j]
    }
  }
  ## Calculate test statistic
  ## Creating and storing the kernel matricies
  KK.store <- matrix(0,nrow=n,ncol=n)  ## n by n kernel matrix can be used later
  for (j in 1:n){
    KK <- ones
    for (jj in 1:q){
      if(data.type[jj]==0){
        dx <- (x[,jj] - x[j,jj])/bw[jj]
        KK <- KK*kk(dx)
      }
      if(data.type[jj]==1){
        dx <- (x[,jj] - x[j,jj])
        KK <- KK*kku(dx,bw[jj])
      }
      if(data.type[jj]==2){
        dx <- (x[,jj] - x[j,jj])
        KK <- KK*kko(dx,bw[jj])
      }
    }
    KK.store[,j] <- KK
    if (loo==1){
      KK.store[j,j] <- 0
    }
  }
  ## Estimating the density f
  est.den <- solve((n-1)*bw.prod)*colSums(KK.store)
  ## Listing items to send back
  return.list <- list(est.den=est.den,bw=bw,KK.store=KK.store)
  return(return.list)
}
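For readers who want to sanity-check the estimator outside R, here is a hedged NumPy sketch of the same mixed-data product-kernel density: a Gaussian kernel for continuous columns, an Aitchison-Aitken-style kernel for unordered columns, and a Wang-van Ryzin-style lambda^|d| kernel for ordered columns. The function name and the decision to mirror the R code's (n-1) normalization are illustrative assumptions.

# Hedged NumPy sketch of the kdensity() routine above (loo=0 case).
# data_type: 0 = continuous, 1 = unordered discrete, 2 = ordered discrete.
import numpy as np

def mixed_kde(x, bw, data_type):
    x = np.asarray(x, dtype=float)
    n, q = x.shape
    # product of continuous bandwidths only, as in bw.prod above
    cont_bw_prod = np.prod([bw[j] for j in range(q) if data_type[j] == 0])
    dens = np.empty(n)
    for i in range(n):
        kk = np.ones(n)
        for j in range(q):
            d = x[:, j] - x[i, j]
            if data_type[j] == 0:      # Gaussian kernel, mirrors kk()
                g = d / bw[j]
                kk *= np.exp(-0.5 * g ** 2) / np.sqrt(2.0 * np.pi)
            elif data_type[j] == 1:    # unordered, mirrors kku()
                kk *= np.where(d == 0, 1.0, bw[j])
            else:                      # ordered, mirrors kko()
                kk *= bw[j] ** np.abs(d)
        dens[i] = kk.sum() / ((n - 1) * cont_bw_prod)  # matches the (n-1) in the R code
    return dens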
Commit d91ed3f7 authored by Diegodlh
Merge branch 'field-output' into dev
Parents: a5e99726, 4593a6ad

+import { Webpage } from "../webpage/webpage";
 import { Domain } from "./domain";
 describe("Domain class", () => {
@@ -5,3 +6,72 @@
     expect(new Domain("example.com").domain).toBe("example.com");
   });
 });
+
+it("does not make citatations for non-applicable template outputs", async () => {
+  const domain = new Domain("example.com", {
+    templates: [
+      {
+        path: "/template2",
+        label: "non-applicable",
+        fields: [
+          {
+            fieldname: "itemType",
+            required: true,
+            procedures: [
+              {
+                selections: [],
+                transformations: [],
+              },
+            ],
+          },
+          {
+            fieldname: "title",
+            required: true,
+            procedures: [
+              {
+                selections: [],
+                transformations: [],
+              },
+            ],
+          },
+        ],
+      },
+      {
+        path: "/template1",
+        label: "applicable",
+        fields: [
+          {
+            fieldname: "itemType",
+            required: true,
+            procedures: [
+              {
+                selections: [{ type: "fixed", config: "webpage" }],
+                transformations: [],
+              },
+            ],
+          },
+          {
+            fieldname: "title",
+            required: true,
+            procedures: [
+              {
+                selections: [{ type: "fixed", config: "title" }],
+                transformations: [],
+              },
+            ],
+          },
+        ],
+      },
+    ],
+  });
+  const target = new Webpage("https://example.com/target");
+  const translationOutput = await domain.translate(target, {
+    onlyApplicable: false,
+  });
+  const templateOutputs = translationOutput.translation.outputs;
+  expect(templateOutputs.length).toBe(2);
+  expect(templateOutputs[0].template.applicable).toBe(false);
+  expect(templateOutputs[0].citation).toBeUndefined();
+  expect(templateOutputs[1].template.applicable).toBe(true);
+  expect(templateOutputs[1].citation).toBeDefined();
+});

@@ -89,22 +89,16 @@
     }
     // translate target with templates for paths returned above
-    let templateOutputs = await this.templates.translateWith(
+    const templateOutputs = await this.templates.translateWith(
       target,
       templatePaths,
       {
         tryAllTemplates: options.allTemplates,
         useFallback: options.forceTemplatePaths === undefined,
+        onlyApplicable: options.onlyApplicable,
       }
     );
-    // parse output with onlyApplicable option
-    if (options.onlyApplicable) {
-      templateOutputs = templateOutputs.filter(
-        (templateOutput) => templateOutput.applicable
-      );
-    }
     let baseCitation: MediaWikiBaseFieldCitation | undefined;
     if (options.fillWithCitoid) {
       // baseCitation = (await target.cache.citoid.getData()).citation
@@ -164,17 +158,21 @@
     baseCitation?: MediaWikiBaseFieldCitation
   ): Translation {
-    const citation = this.makeCitation(
-      templateOutput.outputs,
-      // With this we are setting the output citation's URL to that of the
-      // target Webpage object, which does not follow redirects.
-      // We may change this to the final response URL, but there may be cases
-      // where we do not want to do that (see T210871).
-      // Alternatively, we may let users manually change this using a URL
-      // template field.
-      templateOutput.target.url.href,
-      baseCitation
-    );
+    let citation;
+    if (templateOutput.applicable) {
+      // only make citations for applicable template outputs
+      citation = this.makeCitation(
+        templateOutput.outputs,
+        // With this we are setting the output citation's URL to that of the
+        // target Webpage object, which does not follow redirects.
+        // We may change this to the final response URL, but there may be cases
+        // where we do not want to do that (see T210871).
+        // Alternatively, we may let users manually change this using a URL
+        // template field.
+        templateOutput.target.url.href,
+        baseCitation
+      );
+    }
     let fields: FieldInfo[] | undefined;
     if (templateFieldInfo) {
@@ -216,6 +214,7 @@
         required: fieldOutput.required,
         procedures,
         output: fieldOutput.output,
+        valid: fieldOutput.valid,
         applicable: fieldOutput.applicable,
       };
       return fieldInfo;
@@ -309,7 +308,7 @@
     path: string | undefined; // undefined for fallback template
     fields?: FieldInfo[];
   };
-  citation: WebToCitCitation;
+  citation: WebToCitCitation | undefined;
   timestamp: string;
 };
@@ -325,7 +324,8 @@
     transformations: Array<TransformationDefinition & { output: StepOutput }>;
     output: StepOutput;
   }[];
-  output: Array<string | null>; // this is a validated output; no need to have separate valid property
+  output: StepOutput;
+  valid: boolean;
   applicable: boolean;
 };

@@ -112,13 +112,22 @@ describe("Use an applicable template", () => {
   expect(output.applicable).toBe(true);
 });

-it("skips non-applicable templates", async () => {
+it("skips non-applicable templates by default", async () => {
   const output = (
     await configuration.translateWith(target, paths)
   )[0] as TemplateOutput;
   expect(output.template.label).toBe("second template");
 });

+it("optionally returns non-applicable template outputs", async () => {
+  const outputs = (await configuration.translateWith(target, paths, {
+    onlyApplicable: false,
+  })) as TemplateOutput[];
+  expect(outputs.length).toBe(2);
+  expect(outputs[0].applicable).toBe(false);
+  expect(outputs[1].applicable).toBe(true);
+});
+
 it("outputs the expected results", async () => {
   const output = (
     await configuration.translateWith(target, paths)

@@ -178,7 +178,12 @@ export class TemplateConfiguration extends DomainConfiguration<
   async translateWith(
     target: Webpage,
     paths: string[],
-    { useFallback = true, preferSamePath = true, tryAllTemplates = false } = {}
+    {
+      useFallback = true,
+      preferSamePath = true,
+      tryAllTemplates = false,
+      onlyApplicable = true,
+    } = {}
   ): Promise<TemplateOutput[]> {
     const templates: BaseTranslationTemplate[] = this.get(paths);
     if (preferSamePath) {
@@ -209,6 +214,8 @@
       if (output.applicable) {
         outputs.push(output);
         break;
+      } else if (!onlyApplicable) {
+        outputs.push(output);
       }
     }
   }

@@ -116,6 +116,23 @@ it("marks empty outputs as invalid", async () => {
   expect(fieldOutput.valid).toBe(false);
 });

+it("always returns field output, even if invalid", async () => {
+  const field = new TemplateField({
+    fieldname: "itemType",
+    required: true,
+    procedures: [
+      {
+        selections: [{ type: "fixed", config: "invalidType" }],
+        transformations: [],
+      },
+    ],
+  });
+  const target = new Webpage("https://example.com/target");
+  const output = await field.translate(target);
+  expect(output.output).toEqual(["invalidType"]);
+  expect(output.valid).toBe(false);
+});
+
 it("constructor optionally skips invalid procedure definitions", () => {
   const warnSpy = jest.spyOn(log, "warn").mockImplementation();
   const definition: unknown = {

@@ -5,7 +5,6 @@
 import { MediaWikiBaseFieldCitation, MediaWikiCreator } from "../citoid";
 import {
   TemplateFieldDefinition,
   TemplateFieldOutput,
-  ProcedureOutput,
   ProcedureDefinition,
 } from "../types";
 import log from "loglevel";
@@ -77,16 +76,24 @@
     const procedureOutputs = await Promise.all(
       this.procedures.map((procedure) => procedure.translate(target))
     );
-    const combinedOutput = ([] as string[]).concat(
+    let combinedOutput = ([] as string[]).concat(
       ...procedureOutputs.map((output) => output.output.procedure)
     );
-    const output = this.validate(combinedOutput);
-    const valid = output.length > 0 && output.every((value) => value !== null);
+    // if field does not expect an array for output, join output values
+    if (!this.isArray && combinedOutput.length > 0) {
+      combinedOutput = [combinedOutput.join()];
+    }
+    combinedOutput = combinedOutput.map((value) => value.trim());
+    const valid = this.validate(combinedOutput);
     const fieldOutput: TemplateFieldOutput = {
       fieldname: this.name,
       required: this.required,
       procedureOutputs: procedureOutputs,
-      output,
+      output: combinedOutput,
       valid,
       applicable: valid || !this.required,
       control: this.isControl,
@@ -102,20 +109,10 @@
     };
   }

-  private validate(
-    output: ProcedureOutput["output"]["procedure"]
-  ): Array<string | null> {
-    if (!this.isArray && output.length) {
-      // alternatively, consider:
-      // a. keep first element only
-      // b. return [null] (invalid)
-      output = [output.join()];
-    }
-    const validatedOutput = output.map((output) => {
-      output = output.trim();
-      return this.pattern.test(output) ? output : null;
-    });
-    return validatedOutput;
+  private validate(output: TemplateFieldOutput["output"]): boolean {
+    const valid =
+      output.length > 0 && output.every((value) => this.pattern.test(value));
+    return valid;
   }
 }
@@ -140,11 +137,6 @@ export function outputToCitation(
     // ignore invalid and control fields
     if (field.valid && !field.control) {
      const fieldName = field.fieldname;
-      // fixme: reconsider whether field output should accept null values
-      // see T302024
-      if (field.output.some((value) => value === null)) {
-        throw new Error(`Unexpected non-string value in valid field output`);
-      }
      fields.set(fieldName, field.output as string[]);
     }
   }

@@ -178,8 +178,8 @@ export type TemplateOutput = {
 export type TemplateFieldOutput = {
   fieldname: FieldName;
   procedureOutputs: ProcedureOutput[];
-  output: Array<string | null>; // todo: change Array<string>, see T302024
-  valid: boolean; // todo: remove, see T302024
+  output: StepOutput;
+  valid: boolean;
   required: boolean;
   applicable: boolean; // valid || !required
   control: boolean;
VB Random Number Generation (VB Random Number Generator)

Introduction

Random numbers are a very important part of all kinds of computing applications. VB provides several functions for producing random numbers, the most commonly used being the Rnd function and the Randomize function. This article explains in detail how to use these two functions and how to generate various kinds of random numbers.

The Rnd and Randomize functions

Rnd is the standard VB function for producing random numbers; it returns a random floating-point value greater than or equal to 0 and less than 1. For example, the following code produces a random number between 0 and 1:

Dim number As Double
number = Rnd()

A related function is Randomize, which reseeds VB's internal random number generator so that each run of the program produces a new random sequence from Rnd. The code is as follows:

Randomize
Dim number As Double
number = Rnd()

Generating random integers

Random integers can be produced by combining Rnd with the Int function, as follows:

Dim number As Integer
number = Int(Rnd() * 100)  ' generates a random integer between 0 and 99

To generate integers in a different range, simply adjust the expression passed to Int.

Generating a random number within a given range

To generate a random floating-point number within a specified range, use the following code:

Dim number As Double
number = Rnd() * (maxValue - minValue) + minValue

where maxValue and minValue are the upper and lower bounds of the desired range.

Generating normally distributed random numbers

The normal distribution is a central concept in statistics; its probability density function is the familiar bell curve. Normally distributed random numbers can be generated with the following function, which implements the polar (Marsaglia) method:

Public Function GetNormalRandom(ByVal mean As Double, ByVal stdDev As Double) As Double
    Randomize
    Dim u1 As Double
    Dim u2 As Double
    Dim a As Double
    Dim b As Double
    Dim c As Double
    Dim x As Double
    Dim y As Double
    Do
        u1 = Rnd()
        u2 = Rnd()
        a = 2 * u1 - 1
        b = 2 * u2 - 1
        c = a * a + b * b
    Loop Until c < 1
    x = a * (-2 * Log(c) / c) ^ 0.5
    y = b * (-2 * Log(c) / c) ^ 0.5  ' y is a second independent normal deviate that this function discards
    GetNormalRandom = mean + stdDev * x
End Function

Here mean is the expected value of the normal distribution and stdDev is the standard deviation.

Conclusion

The random number functions provided by VB can satisfy a wide range of needs. This article covered the Rnd and Randomize functions, together with methods for generating random integers, random numbers within a specified range, and normally distributed random numbers. Hopefully it is of some help.
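For a quick cross-check outside VB, the same polar method translates directly to other languages. Below is a minimal Python sketch assuming the same mean/standard-deviation interface as GetNormalRandom; in practice the standard library's random.gauss would normally be used instead.

# Hedged Python sketch of the polar (Marsaglia) method from the VB function above.
import math
import random

def normal_random(mean, std_dev):
    while True:
        a = 2.0 * random.random() - 1.0
        b = 2.0 * random.random() - 1.0
        c = a * a + b * b
        if 0.0 < c < 1.0:  # reject points outside the unit disc (and c == 0)
            break
    x = a * math.sqrt(-2.0 * math.log(c) / c)
    return mean + std_dev * x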
Puppet and LDAP Integration

I was on the main page for MAAS, and I saw that MAAS has integration with Puppet and LDAP. However, I can't seem to find any documentation on those things. I'm sure the documentation exists somewhere, so could someone please point me towards these resources? Sorry if this is a bit of a dumb question, but from looking around there's not really an obvious link to these resources. Thanks.

C.lafluer, I didn't find anything about setting up LDAP or SAML either, but I did dig around in the /etc/maas/preseeds/ directory and found a method of adding some scripting. In that directory you'll find a few files that control the enlistment, commissioning, and deployment phases. You want to look at the curtin files.

https://docs.maas.io/2.5/en/nodes-custom

Under late_commands you can use curtin to run commands on the server as final prep. I'm running Salt, so I added:

late_commands:
  30_packages: ["curtin", "in-target", "--", "sh", "-c", "/usr/bin/apt install git"]
  40_clone: ["curtin", "in-target", "--", "sh", "-c", "git clone https://<my-private-url>/configuration-engineering/preseed-bootstrap.git /tmp/preseed-bootstrap"]
  50_salt_install: ["curtin", "in-target", "--", "sh", "-c", "/bin/chmod +x /tmp/preseed-bootstrap/salt-ubuntu.sh; /tmp/preseed-bootstrap/salt-ubuntu.sh"]

So, basically, I have a git repo with some bootstrap scripts... but you could just curl or wget a script, chmod it, and then run it. Does that make sense? I hope it helps, and I'm ready to buy MAAS if I can figure out the LDAP or SAML integration.
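For the LDAP side specifically, the same late_commands hook could bootstrap an LDAP client at deploy time. A hedged sketch in the same format as the snippet above, assuming Ubuntu's sssd-ldap package and a placeholder URL serving a prebuilt sssd.conf; adapt both to your environment:

late_commands:
  30_sssd: ["curtin", "in-target", "--", "sh", "-c", "/usr/bin/apt install -y sssd-ldap"]
  40_sssd_conf: ["curtin", "in-target", "--", "sh", "-c", "wget -O /etc/sssd/sssd.conf https://<my-private-url>/sssd.conf; chmod 600 /etc/sssd/sssd.conf; systemctl restart sssd"]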
Changeset 5866 for icGREP

Timestamp: Feb 8, 2018, 2:11:29 PM
Author: cameron
Message: Revised canonical form for nullable expressions; extended star-normal xfrm
Location: icGREP/icgrep-devel/icgrep/re
Files: 3 edited

• icGREP/icgrep-devel/icgrep/re/re_alt.h (r5835 → r5866)

     bool nullable = false;
-    RE * emptySeq = nullptr;
+    RE * nullableSeq = nullptr;
     for (auto i = begin; i != end; ++i) {
         if (CC * cc = llvm::dyn_cast<CC>(*i)) {
…
                 if (CC * cc = llvm::dyn_cast<CC>(a)) {
                     combineCC(cc);
-                } else if (isEmptySeq(a)) {
+                } else if (isEmptySeq(a) && !nullable) {
                     nullable = true;
-                    emptySeq = a;
+                    nullableSeq = a;
                 } else {
                     newAlt->push_back(a);
…
         } else if (const Rep * rep = llvm::dyn_cast<Rep>(*i)) {
             if (rep->getLB() == 0) {
-                nullable = true;
-                newAlt->push_back(makeRep(rep->getRE(), 1, rep->getUB()));
+                if (nullable) {
+                    // Already have a nullable case.
+                    newAlt->push_back(makeRep(rep->getRE(), 1, rep->getUB()));
+                }
+                else {
+                    // This will be the nullable case.
+                    nullableSeq = *i;
+                    nullable = true;
+                }
             } else {
                 newAlt->push_back(*i);
             }
         } else if (isEmptySeq(*i)) {
-            nullable = true;
-            emptySeq = *i;
+            if (!nullable) {
+                nullable = true;
+                nullableSeq = *i;
+            }
         } else {
             newAlt->push_back(*i);
…
     newAlt->insert(newAlt->end(), CCs.begin(), CCs.end());
     if (nullable) {
-        if (emptySeq == nullptr) {
-            emptySeq = makeSeq();
+        if (nullableSeq == nullptr) {
+            nullableSeq = makeSeq();
         }
-        newAlt->push_back(emptySeq);
+        newAlt->push_back(nullableSeq);
     }
     return newAlt->size() == 1 ? newAlt->front() : newAlt;

• icGREP/icgrep-devel/icgrep/re/re_star_normal.cpp (r5835 → r5866)

 namespace re {

-RE * RE_Star_Normal::optimize(RE * re) {
-    if (Alt * alt = dyn_cast<Alt>(re)) {
-        std::vector<RE *> list;
-        list.reserve(alt->size());
-        for (RE * re : *alt) {
-            list.push_back(optimize(re));
-        }
-        re = makeAlt(list.begin(), list.end());
-    } else if (Seq * seq = dyn_cast<Seq>(re)) {
-        RE * head = *(seq->begin());
-        RE * tail = makeSeq(seq->begin() + 1, seq->end());
-        const auto headNullable = RE_Nullable::isNullable(head);
-        head = headNullable ? optimize(head) : star_normal(head);
-        const auto tailNullable = RE_Nullable::isNullable(tail);
-        tail = tailNullable ? optimize(tail) : star_normal(tail);
-        if (LLVM_UNLIKELY(headNullable && tailNullable)) {
-            re = makeAlt({head, tail});
-        } else {
-            re = makeSeq({head, tail});
-        }
-    } else if (Assertion * a = dyn_cast<Assertion>(re)) {
-        re = makeAssertion(optimize(a->getAsserted()), a->getKind(), a->getSense());
-    } else if (Rep * rep = dyn_cast<Rep>(re)) {
-        RE * const expr = optimize(rep->getRE());
-        if (rep->getLB() == 0 && rep->getUB() == Rep::UNBOUNDED_REP) {
-            re = expr;
-        } else {
-            re = makeRep(expr, rep->getLB(), rep->getUB());
-        }
-    } else if (Diff * diff = dyn_cast<Diff>(re)) {
-        re = makeDiff(optimize(diff->getLH()), optimize(diff->getRH()));
-    } else if (Intersect * e = dyn_cast<Intersect>(re)) {
-        re = makeIntersect(optimize(e->getLH()), optimize(e->getRH()));
-    } else if (Name * name = dyn_cast<Name>(re)) {
-        if (name->getDefinition()) {
-            name->setDefinition(optimize(name->getDefinition()));
+RE * RE_Star_Normal::star_rule(RE * re) {
+    if (Seq * seq = dyn_cast<Seq>(re)) {
+        if (RE_Nullable::isNullable(re)) {
+            std::vector<RE *> list;
+            list.reserve(seq->size());
+            for (RE * r : *seq) {
+                if (Rep * rep = dyn_cast<Rep>(r)) {
+                    if (rep->getLB() == 0) {
+                        list.push_back(rep->getRE());
+                    }
+                } else if (!isEmptySeq(r)) {
+                    list.push_back(r);
+                }
+            }
+            return makeAlt(list.begin(), list.end());
         }
     }
…
         re = makeAssertion(star_normal(a->getAsserted()), a->getKind(), a->getSense());
     } else if (Rep * rep = dyn_cast<Rep>(re)) {
-         if (rep->getLB() == 0 && rep->getUB() == Rep::UNBOUNDED_REP) {
-            RE * expr = optimize(rep->getRE());
-            re = makeRep(expr, 0, rep->getUB());
+        RE * expr = star_normal(rep->getRE());
+        if (rep->getLB() == 0 && rep->getUB() == Rep::UNBOUNDED_REP) {
+            re = makeRep(star_rule(expr), 0, rep->getUB());
         } else {
-            RE * expr = star_normal(rep->getRE());
             re = makeRep(expr, rep->getLB(), rep->getUB());
         }

• icGREP/icgrep-devel/icgrep/re/re_star_normal.h (r5835 → r5866)

         static RE * star_normal(RE * re);
 private:
-    static RE * optimize(RE * re);
+    static RE * star_rule(RE * re);
 };
blob: e2ca7dd198390ebc923facd690904faeb8300e62

// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef UI_NATIVE_THEME_NATIVE_THEME_H_
#define UI_NATIVE_THEME_NATIVE_THEME_H_

#include <map>

#include "base/containers/flat_map.h"
#include "base/macros.h"
#include "base/observer_list.h"
#include "build/build_config.h"
#include "cc/paint/paint_canvas.h"
#include "third_party/skia/include/core/SkColor.h"
#include "ui/base/models/menu_separator_types.h"
#include "ui/gfx/geometry/rect.h"
#include "ui/gfx/geometry/size.h"
#include "ui/gfx/native_widget_types.h"
#include "ui/native_theme/caption_style.h"
#include "ui/native_theme/native_theme_color_id.h"
#include "ui/native_theme/native_theme_export.h"
#include "ui/native_theme/native_theme_observer.h"

namespace gfx {
class Rect;
class Size;
}

namespace ui {

// This class supports drawing UI controls (like buttons, text fields, lists,
// comboboxes, etc) that look like the native UI controls of the underlying
// platform, such as Windows or Linux. It also supplies default colors for
// dialog box backgrounds, etc., which are obtained from the system theme where
// possible.
//
// The supported control types are listed in the Part enum. These parts can be
// in any state given by the State enum, where the actual definition of the
// state is part-specific. The supported colors are listed in the ColorId enum.
//
// Some parts require more information than simply the state in order to be
// drawn correctly, and this information is given to the Paint() method via the
// ExtraParams union. Each part that requires more information has its own
// field in the union.
//
// NativeTheme also supports getting the default size of a given part with
// the GetPartSize() method.
class NATIVE_THEME_EXPORT NativeTheme {
 public:
  // The part to be painted / sized.
  enum Part {
    kCheckbox,
#if defined(OS_LINUX) && !defined(OS_CHROMEOS)
    kFrameTopArea,
#endif
    kInnerSpinButton,
    kMenuList,
    kMenuPopupBackground,
#if defined(OS_WIN)
    kMenuCheck,
    kMenuCheckBackground,
    kMenuPopupArrow,
    kMenuPopupGutter,
#endif
    kMenuPopupSeparator,
    kMenuItemBackground,
    kProgressBar,
    kPushButton,
    kRadio,

    // The order of the arrow enums is important, do not change without also
    // changing the code in platform implementations.
    kScrollbarDownArrow,
    kScrollbarLeftArrow,
    kScrollbarRightArrow,
    kScrollbarUpArrow,

    kScrollbarHorizontalThumb,
    kScrollbarVerticalThumb,
    kScrollbarHorizontalTrack,
    kScrollbarVerticalTrack,
    kScrollbarHorizontalGripper,
    kScrollbarVerticalGripper,
    // The corner is drawn when there is both a horizontal and vertical
    // scrollbar.
    kScrollbarCorner,
    kSliderTrack,
    kSliderThumb,
    kTabPanelBackground,
    kTextField,
    kTrackbarThumb,
    kTrackbarTrack,
    kWindowResizeGripper,
    kMaxPart,
  };

  // The state of the part.
  enum State {
    // IDs defined as specific values for use in arrays.
    kDisabled = 0,
    kHovered = 1,
    kNormal = 2,
    kPressed = 3,
    kNumStates = kPressed + 1,
  };

  // OS-level preferred color scheme. (Ex. high contrast or dark mode color
  // preference.)
  enum PreferredColorScheme {
    kDark,
    kLight,
    kMaxValue = kLight,
  };

  // IMPORTANT!
  // This enum is reporting in metrics. Do not reorder; add additional values at
  // the end.
  //
  // This represents the OS-level high contrast theme. kNone unless the default
  // system color scheme is kPlatformHighContrast.
enum class PlatformHighContrastColorScheme { kNone = 0, kDark = 1, kLight = 2, kMaxValue = kLight, }; // The color scheme used for painting the native controls. enum class ColorScheme { kDefault, kLight, kDark, kPlatformHighContrast, // When the platform is providing HC colors (eg. // Win) }; // This enum represents the available unique security chip color states. enum class SecurityChipColorId { DEFAULT, SECURE, SECURE_WITH_CERT, DANGEROUS, }; // Each structure below holds extra information needed when painting a given // part. struct ButtonExtraParams { bool checked; bool indeterminate; // Whether the button state is indeterminate. bool is_default; // Whether the button is default button. bool is_focused; bool has_border; int classic_state; // Used on Windows when uxtheme is not available. SkColor background_color; float zoom; }; struct FrameTopAreaExtraParams { // Distinguishes between active (foreground) and inactive // (background) window frame styles. bool is_active; bool incognito; // True when Chromium renders the titlebar. False when the window // manager renders the titlebar. bool use_custom_frame; // If the NativeTheme will paint a solid color, it should use // |default_background_color|. SkColor default_background_color; }; struct InnerSpinButtonExtraParams { bool spin_up; bool read_only; int classic_state; // Used on Windows when uxtheme is not available. }; struct MenuArrowExtraParams { bool pointing_right; // Used for the disabled state to indicate if the item is both disabled and // selected. bool is_selected; }; struct MenuCheckExtraParams { bool is_radio; // Used for the disabled state to indicate if the item is both disabled and // selected. bool is_selected; }; struct MenuSeparatorExtraParams { const gfx::Rect* paint_rect; MenuSeparatorType type; }; struct MenuItemExtraParams { bool is_selected; int corner_radius; }; struct MenuListExtraParams { bool has_border; bool has_border_radius; int arrow_x; int arrow_y; int arrow_size; SkColor arrow_color; SkColor background_color; int classic_state; // Used on Windows when uxtheme is not available. }; struct MenuBackgroundExtraParams { int corner_radius; }; struct ProgressBarExtraParams { double animated_seconds; bool determinate; int value_rect_x; int value_rect_y; int value_rect_width; int value_rect_height; }; struct ScrollbarArrowExtraParams { bool is_hovering; float zoom; bool right_to_left; }; struct ScrollbarTrackExtraParams { bool is_upper; int track_x; int track_y; int track_width; int track_height; int classic_state; // Used on Windows when uxtheme is not available. }; enum ScrollbarOverlayColorTheme { ScrollbarOverlayColorThemeDark, ScrollbarOverlayColorThemeLight }; struct ScrollbarThumbExtraParams { bool is_hovering; ScrollbarOverlayColorTheme scrollbar_theme; }; #if defined(OS_APPLE) enum ScrollbarOrientation { // Vertical scrollbar on the right side of content. kVerticalOnRight, // Vertical scrollbar on the left side of content. kVerticalOnLeft, // Horizontal scrollbar (on the bottom of content). kHorizontal, }; // A unique set of scrollbar params. Currently needed for Mac. struct ScrollbarExtraParams { bool is_hovering; bool is_overlay; ScrollbarOverlayColorTheme scrollbar_theme; ScrollbarOrientation orientation; // Used on Mac for drawing gradients. 
  };
#endif

  struct SliderExtraParams {
    bool vertical;
    bool in_drag;
    int thumb_x;
    int thumb_y;
    float zoom;
    bool right_to_left;
  };

  struct TextFieldExtraParams {
    bool is_text_area;
    bool is_listbox;
    SkColor background_color;
    bool is_read_only;
    bool is_focused;
    bool fill_content_area;
    bool draw_edges;
    int classic_state;  // Used on Windows when uxtheme is not available.
    bool has_border;
    bool auto_complete_active;
  };

  struct TrackbarExtraParams {
    bool vertical;
    int classic_state;  // Used on Windows when uxtheme is not available.
  };

  union NATIVE_THEME_EXPORT ExtraParams {
    ExtraParams();
    ExtraParams(const ExtraParams& other);

    ButtonExtraParams button;
    FrameTopAreaExtraParams frame_top_area;
    InnerSpinButtonExtraParams inner_spin;
    MenuArrowExtraParams menu_arrow;
    MenuCheckExtraParams menu_check;
    MenuItemExtraParams menu_item;
    MenuSeparatorExtraParams menu_separator;
    MenuListExtraParams menu_list;
    MenuBackgroundExtraParams menu_background;
    ProgressBarExtraParams progress_bar;
    ScrollbarArrowExtraParams scrollbar_arrow;
#if defined(OS_APPLE)
    ScrollbarExtraParams scrollbar_extra;
#endif
    ScrollbarTrackExtraParams scrollbar_track;
    ScrollbarThumbExtraParams scrollbar_thumb;
    SliderExtraParams slider;
    TextFieldExtraParams text_field;
    TrackbarExtraParams trackbar;
  };

  // Return the size of the part.
  virtual gfx::Size GetPartSize(Part part,
                                State state,
                                const ExtraParams& extra) const = 0;

  virtual float GetBorderRadiusForPart(Part part,
                                       float width,
                                       float height,
                                       float zoom) const;

  // Paint the part to the canvas.
  virtual void Paint(
      cc::PaintCanvas* canvas,
      Part part,
      State state,
      const gfx::Rect& rect,
      const ExtraParams& extra,
      ColorScheme color_scheme = ColorScheme::kDefault) const = 0;

  // Paint part during state transition, used for overlay scrollbar state
  // transition animation.
  virtual void PaintStateTransition(cc::PaintCanvas* canvas,
                                    Part part,
                                    State startState,
                                    State endState,
                                    double progress,
                                    const gfx::Rect& rect,
                                    ScrollbarOverlayColorTheme theme) const {}

  // Returns whether the theme uses a nine-patch resource for the given part.
  // If true, calling code should always paint into a canvas the size of which
  // can be gotten from GetNinePatchCanvasSize.
  virtual bool SupportsNinePatch(Part part) const = 0;

  // If the part paints into a nine-patch resource, the size of the canvas
  // which should be painted into.
  virtual gfx::Size GetNinePatchCanvasSize(Part part) const = 0;

  // If the part paints into a nine-patch resource, the rect in the canvas
  // which defines the center tile. This is the tile that should be resized out
  // when the part is resized.
  virtual gfx::Rect GetNinePatchAperture(Part part) const = 0;

  // Colors for GetSystemColor().
  enum ColorId {
#define OP(enum_name) enum_name,
    NATIVE_THEME_COLOR_IDS
#undef OP

    kColorId_NumColors,
  };

  enum class SystemThemeColor {
    kNotSupported,
    kButtonFace,
    kButtonText,
    kGrayText,
    kHighlight,
    kHighlightText,
    kHotlight,
    kMenuHighlight,
    kScrollbar,
    kWindow,
    kWindowText,
    kMaxValue = kWindowText,
  };

  // Return a color from the system theme.
  virtual SkColor GetSystemColor(
      ColorId color_id,
      ColorScheme color_scheme = ColorScheme::kDefault) const;

  // Returns a shared instance of the native theme that should be used for web
  // rendering. Do not use it in a normal application context (i.e. browser).
  // The returned object should not be deleted by the caller. This function is
  // not thread safe and should only be called from the UI thread. Each port of
  // NativeTheme should provide its own implementation of this function,
  // returning the port's subclass.
  static NativeTheme* GetInstanceForWeb();

  // Returns a shared instance of the default native theme for native UI.
  static NativeTheme* GetInstanceForNativeUi();

  // Returns a shared instance of the native theme for incognito UI.
  static NativeTheme* GetInstanceForDarkUI();

  // Whether OS-level dark mode is available in the current OS.
  static bool SystemDarkModeSupported();

  // Add or remove observers to be notified when the native theme changes.
  void AddObserver(NativeThemeObserver* observer);
  void RemoveObserver(NativeThemeObserver* observer);

  // Notify observers of native theme changes.
  void NotifyObservers();

  // Returns whether this NativeTheme uses higher-contrast colors, controlled
  // by system accessibility settings and the system theme.
  virtual bool UsesHighContrastColors() const;

  // Returns the PlatformHighContrastColorScheme used by the OS. Returns a
  // value other than kNone only if the default system color scheme is
  // kPlatformHighContrast.
  PlatformHighContrastColorScheme GetPlatformHighContrastColorScheme() const;

  // Returns true when the NativeTheme uses a light-on-dark color scheme. If
  // you're considering using this function to choose between two hard-coded
  // colors, you probably shouldn't. Instead, use GetSystemColor().
  virtual bool ShouldUseDarkColors() const;

  // Returns the OS-level user preferred color scheme. See the comment for
  // CalculatePreferredColorScheme() for details on how preferred color scheme
  // is calculated.
  virtual PreferredColorScheme GetPreferredColorScheme() const;

  // Returns the system's caption style.
  virtual base::Optional<CaptionStyle> GetSystemCaptionStyle() const;

  virtual ColorScheme GetDefaultSystemColorScheme() const;

  virtual const std::map<SystemThemeColor, SkColor>& GetSystemColors() const;

  base::Optional<SkColor> GetSystemThemeColor(
      SystemThemeColor theme_color) const;

  bool HasDifferentSystemColors(
      const std::map<SystemThemeColor, SkColor>& colors) const;

  void set_use_dark_colors(bool should_use_dark_colors) {
    should_use_dark_colors_ = should_use_dark_colors;
  }
  void set_high_contrast(bool is_high_contrast) {
    is_high_contrast_ = is_high_contrast;
  }
  void set_preferred_color_scheme(
      PreferredColorScheme preferred_color_scheme) {
    preferred_color_scheme_ = preferred_color_scheme;
  }
  void set_system_colors(const std::map<SystemThemeColor, SkColor>& colors);

  // Updates the state of dark mode, high contrast, preferred color scheme,
  // and the map of system colors. Returns true if NativeTheme was updated
  // as a result, or false if the state of NativeTheme was untouched.
  bool UpdateSystemColorInfo(
      bool is_dark_mode,
      bool is_high_contrast,
      const base::flat_map<SystemThemeColor, uint32_t>& colors);

  // On certain platforms, currently only Mac, there is a unique visual for
  // pressed states.
  virtual SkColor GetSystemButtonPressedColor(SkColor base_color) const;

  // Assign the focus-ring-appropriate alpha value to the provided base_color.
  virtual SkColor FocusRingColorForBaseColor(SkColor base_color) const;

 protected:
  explicit NativeTheme(bool should_only_use_dark_colors);
  virtual ~NativeTheme();

  // Whether high contrast is forced via command-line flag.
  bool IsForcedHighContrast() const;
  // Whether dark mode is forced via command-line flag.
  bool IsForcedDarkMode() const;

  // Calculates and returns the current user preferred color scheme. The
  // base behavior is to set preferred color scheme to light or dark depending
  // on the state of dark mode.
  //
  // Some platforms override this behavior. On Windows, for example, we also
  // look at the high contrast setting. If high contrast is enabled, the
  // preferred color scheme calculation will ignore the state of dark mode.
  // Instead, preferred color scheme will be light, or dark depending on the
  // OS high contrast theme. If high contrast is off, the preferred color
  // scheme calculation will follow the default behavior.
  virtual PreferredColorScheme CalculatePreferredColorScheme() const;

  // A function to be called by native theme instances that need to set state
  // or listeners with the webinstance in order to provide correct native
  // platform behaviors.
  virtual void ConfigureWebInstance() {}

  // Allows one native theme to observe changes in another. For example, the
  // web native theme for Windows observes the corresponding ui native theme
  // in order to receive changes regarding the state of dark mode, high
  // contrast, and preferred color scheme.
  class NATIVE_THEME_EXPORT ColorSchemeNativeThemeObserver
      : public NativeThemeObserver {
   public:
    ColorSchemeNativeThemeObserver(NativeTheme* theme_to_update);
    ~ColorSchemeNativeThemeObserver() override;

   private:
    // ui::NativeThemeObserver:
    void OnNativeThemeUpdated(ui::NativeTheme* observed_theme) override;

    // The theme that gets updated when OnNativeThemeUpdated() is called.
    NativeTheme* const theme_to_update_;

    DISALLOW_COPY_AND_ASSIGN(ColorSchemeNativeThemeObserver);
  };

  mutable std::map<SystemThemeColor, SkColor> system_colors_;

 private:
  // Observers to notify when the native theme changes.
  base::ObserverList<NativeThemeObserver>::Unchecked native_theme_observers_;

  bool should_use_dark_colors_ = false;
  bool is_high_contrast_ = false;
  PreferredColorScheme preferred_color_scheme_ = PreferredColorScheme::kLight;

  DISALLOW_COPY_AND_ASSIGN(NativeTheme);
};

}  // namespace ui

#endif  // UI_NATIVE_THEME_NATIVE_THEME_H_
OCaml - Function - Creating an easy function

In this tutorial we will see how to create a function in the OCaml language. It will be a simple function, to make it easy to understand how functions work. We will create a function that returns an int + 1. Here is the code:

# let myFunc myVar = myVar + 1;;

Notice that you can also create a function like this:

# let myFunc = fun myVar -> myVar + 1;;

Press enter, and the toplevel will display:

  val myFunc : int -> int = <fun>

It means that myFunc is the name of the function, the first int is the type of the argument (in our case the myVar value), and the second int, after the arrow (->), is the type of the return value. The last element, "= <fun>", tells us that this is a function.

So to use this function, simply type:

# myFunc 4;;

Result:

- : int = 5

Well done, it was your first function in OCaml!
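As a small follow-up sketch that is not part of the original tutorial: functions with several parameters work the same way, and the toplevel prints one arrow per argument (the name addTwo below is just an illustrative choice):

# let addTwo a b = a + b;;
val addTwo : int -> int -> int = <fun>
# addTwo 3 4;;
- : int = 7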
How to Close a Tkinter Window With a Button

1. root.destroy() Class Method to Close the Tkinter Window
2. destroy() Non-Class Method to Close the Tkinter Window
3. Bind the root.destroy Function Directly to the Button's command Attribute
4. root.quit to Close the Tkinter Window

We can use a function or command attached to a button in the Tkinter GUI to close the Tkinter window when the user clicks it.

root.destroy() Class Method to Close the Tkinter Window

try:
    import Tkinter as tk
except:
    import tkinter as tk

class Test():
    def __init__(self):
        self.root = tk.Tk()
        self.root.geometry('100x50')
        button = tk.Button(self.root,
                           text = 'Click and Quit',
                           command = self.quit)
        button.pack()
        self.root.mainloop()

    def quit(self):
        self.root.destroy()

app = Test()

destroy() destroys, in other words closes, the window.

destroy() Non-Class Method to Close the Tkinter Window

try:
    import Tkinter as tk
except:
    import tkinter as tk

root = tk.Tk()
root.geometry("100x50")

def close_window():
    root.destroy()

button = tk.Button(text = "Click and Quit", command = close_window)
button.pack()

root.mainloop()

Bind the root.destroy Function Directly to the Button's command Attribute

We can bind the root.destroy function directly to the button's command attribute, without defining the extra close_window function.

try:
    import Tkinter as tk
except:
    import tkinter as tk

root = tk.Tk()
root.geometry("100x50")

button = tk.Button(text = "Click and Quit", command = root.destroy)
button.pack()

root.mainloop()

root.quit to Close the Tkinter Window

root.quit quits not only the Tkinter window but, more precisely, the whole Tcl interpreter. It can be used if your Tkinter application is not started from Python IDLE. It is not recommended to use root.quit if your Tkinter application is launched from IDLE, because quit kills not only your Tkinter application but also IDLE itself, since IDLE is also a Tkinter application.

try:
    import Tkinter as tk
except:
    import tkinter as tk

root = tk.Tk()
root.geometry("100x50")

button = tk.Button(text = "Click and Quit", command = root.quit)
button.pack()

root.mainloop()
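A complementary sketch (an addition, not part of the original article): the same destroy call can also handle the window manager's own close button, via the WM_DELETE_WINDOW protocol. The handler name on_closing is a hypothetical choice:

import tkinter as tk

root = tk.Tk()
root.geometry("100x50")

def on_closing():
    # Runs when the user clicks the title-bar close button
    root.destroy()

root.protocol("WM_DELETE_WINDOW", on_closing)
root.mainloop()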
HaxePunk shooting game tutorial: Part 3

posted on Sep 28, 2014 in Haxe, OpenFL, Game design, HaxePunk

It's time to add destroyable enemies to our game. Each enemy is an entity with a random speed and spawn position, moving downwards until it is shot down or exits the screen. The enemy spawn handling is not part of the PlayerShip logic, so the code will go in the MainScene.hx class. We'll need to handle random spawning, movement, and collision with the player's bullets.

Let's first go to the Bullet.hx class and add a few things which will help us implement collision detection. In HaxePunk, collidable Entities need to have a hitbox. This is a simple rectangle which determines the collidable area of the Entity. We can also give all our Bullets a common type "bullet", so that we can check the collision of enemy ships with all bullets. Here's the full Bullet.hx code:

package ;

import com.haxepunk.Entity;

/**
 * Bullet entity.
 * @author Kirill Poletaev
 */
class Bullet extends Entity
{
	public function new(g:Dynamic)
	{
		super();
		graphic = g;
		type = "bullet";
		setHitbox(14, 24, 0, 0);
	}

	override public function update()
	{
		this.y -= 12;
		if (this.y < -50) {
			scene.remove(this);
		}
	}
}

The next step is to add the enemy spawning logic to the MainScene.hx class. Full code:

import com.haxepunk.graphics.atlas.TextureAtlas;
import com.haxepunk.graphics.Image;
import com.haxepunk.HXP;
import com.haxepunk.Scene;
import com.haxepunk.utils.Input;
import com.haxepunk.utils.Key;

class MainScene extends Scene
{
	private var player:PlayerShip;
	private var spawnInterval:Int;
	private var enemyGraphic:Image;

	public function new()
	{
		super();
		Input.define("up", [Key.UP, Key.W]);
		Input.define("down", [Key.DOWN, Key.S]);
		Input.define("left", [Key.LEFT, Key.A]);
		Input.define("right", [Key.RIGHT, Key.D]);
		var atlas:TextureAtlas = TextureAtlas.loadTexturePacker("atlas/atlas.xml");
		enemyGraphic = new Image(atlas.getRegion("enemyShip"));
		player = new PlayerShip(atlas);
		add(player);
		spawnInterval = Math.round(Math.random() * 50) + 50;
	}

	override public function update()
	{
		super.update();
		spawnInterval--;
		if (spawnInterval == 0) {
			var enemy = new EnemyShip(enemyGraphic);
			add(enemy);
			enemy.x = Math.round(Math.random() * (HXP.width - 64));
			enemy.y = -50;
			spawnInterval = Math.round(Math.random() * 20) + 30;
		}
	}
}

As you can see, we added two variables: one for storing the graphical data for all enemy ships, and one for handling the delay between enemy spawns. Every few frames we create a new EnemyShip entity, and here is its code:

package ;

import com.haxepunk.Entity;
import com.haxepunk.HXP;

/**
 * Enemy ship entity.
 * @author Kirill Poletaev
 */
class EnemyShip extends Entity
{
	private var speed:Int;

	public function new(g:Dynamic)
	{
		super();
		graphic = g;
		speed = Math.ceil(Math.random() * 3);
		setHitbox(64, 48, 0, 0);
	}

	override public function update()
	{
		this.y += speed;
		if (this.y > HXP.height) {
			scene.remove(this);
		}
		var collidedEntity = collide("bullet", x, y);
		if (collidedEntity != null) {
			scene.remove(this);
			scene.remove(collidedEntity);
		}
	}
}

As you can see, it's an Entity subclass which receives a graphic value in the constructor as the only parameter, generates a random speed value, and moves downwards every frame. It is important to set the hitbox value for this entity, since we are going to check for collisions in the update() method. If the Entity goes off screen, we remove it. If it collides with any Entity of the type "bullet", we remove both this ship and that bullet.

If you test your game now, you'll be able to shoot down the incoming enemy spaceship fleet.

There are still a lot of things missing from our game, such as healthbars, score, particles and sounds. We'll add enemy health and explosion particles in the next tutorial!
I've been looking at implementing a library to use the MS5637 Barometric Sensor. I would like to have a C# implementation that I can use on Raspberry Pi 2 running Windows IoT, using the I2C interface. I can write it myself, but I don't want to unnecessarily reinvent the wheel. If any of you have a C# library that accomplishes this, please let me know.

• Well... I couldn't find what I needed, so I wrote it myself. Please see the answer below for the implementation written by me. – Curtis Feb 3 '16 at 23:36

Ok, I've implemented a C# class that talks very nicely to the MS5637 sensor using I2C and Windows 10 IoT. In order to use the code below, you need to add these references:

Universal Windows
Windows IoT Extensions for the UWP
Windows Mobile Extensions for the UWP
Microsoft.NetCore.UniversalWindowsPlatform (NuGet)

Here's the code. Enjoy...

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Devices.I2c;

namespace MS5637
{
    /*
    This class was written by:
    Curtis V. Harrison - [email protected]
    You can use it all you want, just be nice and keep my name at the top of the file.

    Sample code for using the 'Windows.Devices.I2c.I2cDevice' can be found at:
    https://ms-iot.github.io/content/en-US/win10/samples/I2CAccelerometer.htm

    Data Sheet for MS5637 can be found at:
    http://www.farnell.com/datasheets/1756129.pdf
    */

    /// <summary>
    /// MS5637Device class usage
    /// This class is simple to use:
    ///
    /// 1. Create an instance using the default no-param constructor, or provide a Slave Address.
    ///    The constructors will initialize the sensor and prepare it for data retrieval.
    /// 2. Call 'GetTemperaturePressure()' and you will receive a 'TemperaturePressure' object.
    ///    Temperature is in Fahrenheit and pressure is in mbars.
    /// 3. Use the 'Temperature' or 'Pressure' properties of the TemperaturePressure object as you like.
    /// 4. Whenever you need an updated Temperature or Pressure, follow instruction 2 above.
    /// </summary>
    public class MS5637Device : IDisposable
    {
        private const int MS5637_SLAVE_ADDRESS = 0x76;

        private static readonly byte[] MS5637_RESET_COMMAND = {0x1E};
        private static readonly byte[] MS5637_COEFFICENTS_COMMAND = {0xA0};
        private static readonly byte[] MS5637_COEFFICENTS_END_COMMAND = {0xAE};
        private static readonly byte[] MS5637_HI_RES_TEMPERATURE = {0x5A};
        private static readonly byte[] MS5637_HI_RES_PRESSURE = {0x4A};
        private static readonly byte[] MS5637_DATA_READ_COMMAND = {0x00};

        private I2cDevice Sensor { get; set; }
        private List<ushort> Coefficients { get; set; }
        private double Temperature { get; set; }
        private double Pressure { get; set; }

        /// <summary>
        /// Default constructor requires no arguments.
        /// This constructor assumes a SlaveAddress of 0x76 for the MS5637 sensor.
        /// Constructing this object will initialize it and prepare it for data retrieval.
        /// </summary>
        public MS5637Device() : this(MS5637_SLAVE_ADDRESS)
        {
        }

        /// <summary>
        /// Constructor requires a single byte 'SlaveAddress' as an argument.
        /// The typical SlaveAddress for the MS5637 sensor is 0x76.
        /// Constructing this object will initialize it and prepare it for data retrieval.
        /// </summary>
        public MS5637Device(byte slaveAddress)
        {
            Init(slaveAddress);
        }

        /// <summary>
        /// Call this method whenever you want to get current Temperature and Pressure values.
        /// This method returns a 'TemperaturePressure' object which in turn has a Temperature
        /// and a Pressure property.
        /// </summary>
        /// <returns></returns>
        public TemperaturePressure GetTemperaturePressure()
        {
            try
            {
                GetD1D2();
            }
            catch
            {
                SpinWait.SpinUntil(() => false, 500);
                Sensor.Write(MS5637_RESET_COMMAND); //Reset Device
                SpinWait.SpinUntil(() => false, 10);
                Debug.WriteLine("BOOM !!! CRASH !!! RESET !!!");
            }
            return new TemperaturePressure(Temperature, Pressure);
        }

        private void Init(int slaveAddress)
        {
            InitSensor(slaveAddress).Wait();

            byte[] command = new byte[1];
            Sensor.Write(MS5637_RESET_COMMAND); //Reset Device
            SpinWait.SpinUntil(() => false, 10);

            Coefficients = new List<ushort>();
            byte[] coefficent = new byte[2];
            for (command[0] = MS5637_COEFFICENTS_COMMAND[0]; command[0] < MS5637_COEFFICENTS_END_COMMAND[0]; command[0] += 2)
            {
                coefficent[0] = 0;
                coefficent[1] = 0;
                Sensor.WriteRead(command, coefficent);
                Coefficients.Add((ushort)(coefficent[0] << 8 | coefficent[1]));
                SpinWait.SpinUntil(() => false, 50);
            }
            GetTemperaturePressure();
        }

        private async Task InitSensor(int slaveAddress)
        {
            var settings = new I2cConnectionSettings(slaveAddress) {BusSpeed = I2cBusSpeed.FastMode};
            string aqs = I2cDevice.GetDeviceSelector();
            var dis = await DeviceInformation.FindAllAsync(aqs);
            Sensor = await I2cDevice.FromIdAsync(dis[0].Id, settings);
        }

        private uint ConvertBytesToInt32(byte[] data)
        {
            uint value = ((uint)data[0] << 16) | (uint)data[1] << 8 | data[2];
            return value;
        }

        private void GetD1D2()
        {
            byte[] d1PressureBytes = new byte[3];
            Sensor.Write(MS5637_HI_RES_PRESSURE);
            SpinWait.SpinUntil(() => false, 30);
            Sensor.WriteRead(MS5637_DATA_READ_COMMAND, d1PressureBytes);
            uint D1 = ConvertBytesToInt32(d1PressureBytes);

            byte[] d2TemperatureBytes = new byte[3];
            Sensor.Write(MS5637_HI_RES_TEMPERATURE);
            SpinWait.SpinUntil(() => false, 30);
            Sensor.WriteRead(MS5637_DATA_READ_COMMAND, d2TemperatureBytes);
            uint D2 = ConvertBytesToInt32(d2TemperatureBytes);

            CalculateTemperaturePressure(D1, D2);
        }

        private void CalculateTemperaturePressure(uint D1, uint D2)
        {
            var dT = D2 - Coefficients[5] * Math.Pow(2, 8);
            var offset = Coefficients[2]*Math.Pow(2, 17) + (dT*Coefficients[4])/Math.Pow(2, 6);
            var sensitivity = Coefficients[1]*Math.Pow(2, 16) + (dT*Coefficients[3])/Math.Pow(2, 7);

            var t1 = (2000 + (dT * Coefficients[6]) / Math.Pow(2, 23)) / 100; //Assume temperature is > 20 Celsius inside somebody's home
            var t2 = 5 * Math.Pow(dT, 2) / Math.Pow(2, 38);
            t1 -= t2;
            Temperature = t1 * 1.8 + 32;
            Pressure = (D1*sensitivity/Math.Pow(2, 21) - offset)/Math.Pow(2, 15)/100;
        }

        /// <summary>
        /// Disposes the internal Sensor object when Dispose() is called on this object.
        /// </summary>
        public void Dispose()
        {
            if (Sensor != null)
            {
                Sensor.Dispose();
                Sensor = null;
            }
        }
    }

    /// <summary>
    /// This is a simple container class which allows for a Temperature and a Pressure
    /// to be stored and passed.
    /// </summary>
    public class TemperaturePressure
    {
        public double Temperature { get; set; }
        public double Pressure { get; set; }

        public TemperaturePressure(double temperature, double pressure)
        {
            Temperature = temperature;
            Pressure = pressure;
        }
    }
}

The best place at the moment to look for any supporting device libraries like that would be Microsoft's IoT GitHub page. If you cannot find a C# implementation anywhere, then it's likely there is none yet. You would have to find a C implementation, like for Arduino, see how it's configured and note any caveats, and try to port it as you suggest.

It would be brilliant if you could get the IoT Git, do your implementation in something like "samples/I2CBarometer" - following the conventions (directories, file names; similar to the Compass Sample, using an interface to allow various chip 'drivers' to be implemented) - and do a pull request. You may have to follow some other steps for doing a PR. It is likely somebody will accept it, comment on it, etc. Even if it takes a while, your code will be there for others to use, in the right place, and hopefully indexed by search engines so it can be found. If you need more help, feel free to comment below to notify me (I am not associated with MS but willing to help the community grow). Maybe first get your base C# code going; then it can be converted into a Sample and interfaced ready for Git.
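As an editorial footnote to the accepted answer (this usage sketch is an addition, not from the thread), consuming the class follows the steps described in its summary comment; the variable names here are illustrative only:

using System.Diagnostics;

// Hypothetical usage, e.g. inside a UWP app's code-behind.
// The default constructor assumes the typical 0x76 slave address.
using (var sensor = new MS5637.MS5637Device())
{
    var reading = sensor.GetTemperaturePressure();
    Debug.WriteLine($"Temperature: {reading.Temperature} F, Pressure: {reading.Pressure} mbar");
}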
What is Cloud Computing?

To establish an excellent relationship with a partner or a provider, you will need advice that leads you to a correct decision; we will talk about this later. First, however, let's look at the scope of cloud computing, its schemes, and its service classification.

Cloud computing is a technology that allows both individuals and companies to store files and programs remotely, instead of using local hard drives and servers. In fact, many people today use cloud computing without realizing it, either for work or for personal use. Some examples are web-based email such as Gmail and Hotmail, communication tools such as Skype, video sites such as YouTube, and music sharing on SoundCloud.

What is the scope of cloud computing?

Although some users find technology topics easy to understand, other consumers need more information and support. You may not yet be clear on the concept of cloud computing, so we will explain it with basic examples.

Personal and professional daily life is full of technological tools and products aimed at creating a better experience for everyday tasks and entertainment, and most of them work online. Clear examples are: television (Netflix), email (Outlook), storage (OneDrive, Google Drive), document editing (Google Documents), online music (Spotify), image files (iCloud), and others.

The applications mentioned above work thanks to systems backed by cloud computing.

Among the main features of cloud computing are:

• Development of applications and other services.
• Data analysis and the creation of prediction models or patterns in business.
• Development and administration of software.
• Storage, backup, and recovery of data.
• Sharing videos, photos, and audio.
• Website hosting services.

Also Read – How Cloud Computing Works?

Advantages of cloud computing:

1. Support and reliability

Cloud computing allows stored data to be backed up and recovered after a disaster while the operations of organizations and businesses continue to run. One of its main functions is to host information in the right places, avoiding redundant information and the use of unnecessary space.

2. Reduction in technology expenses

Cloud computing reduces the cost of acquiring software and hardware; likewise, it optimizes the resources invested in maintaining data centers, servers, and electrical systems. The people in charge of the IT department will quickly see the results.

3. Performance

The largest and farthest-reaching cloud computing services run on global networks of secure data centers, which are constantly updated with efficient, fast hardware. This matters greatly for the IT administration of companies and institutions.

4. Productivity

Cloud computing is important support that lets the IT department focus on its most important tasks. Through cloud computing technology, the resources spent maintaining a local data center (hardware configuration, security patches, among other tasks) are drastically reduced.

5. Speed

Most cloud computing providers offer self-service features for provisioning various computing resources, which gives companies flexibility and reduces the effort of technology planning.

Also Read – A Brief History of Cloud Storage

A very strange name

Nobody really knows when, but one day someone decided to draw a cloud to symbolize a network with an indefinite topology, probably on a diagram illustrating an interconnection of networks. Since then, the new symbol has flourished. Everyone has taken up the idea, especially to represent the largest network, the network of networks: the Internet.

Indeed, how do you find the right visual metaphor for something that is inherently vague, poorly defined, and therefore unpresentable? In people's minds, the concept of the cloud, a vaporous and somewhat shapeless volume with vague contours and variable geometry, perfectly matched the symbolization of a confusing mass like the Internet. Quickly, the cloud became the visual metaphor for the Net par excellence.

Cloud computing comes in three flavors:

1. IaaS (infrastructure as a service)

This is the low-level version of the cloud service. It is essentially the rental of virtual machines on which the client can install the operating system and software that suit their needs. It is also the rental of storage space, for data backups for example. IaaS allows you to expand your infrastructure by slicing up the infrastructure of a server farm or data warehouse. It is up to the customer to maintain the software environment, if there is one.

2. PaaS (platform as a service)

This version adds the provision of a pre-configured software environment to the infrastructure. For example, in the case of a virtual server, the provider supplies, installs, and maintains the operating system. Web hosting, where you rent both infrastructure (disk space, processor, RAM, network connectivity) and a ready-to-use software environment, falls under PaaS.

3. SaaS (software as a service)

In this case, you rent the use of software, which you will usually access via a web interface. The application is hosted by the provider and runs on its infrastructure. Google Mail, Microsoft Office Online, and Slack are examples of SaaS.

The Bottom Line

Although defining a cloud scenario is a task that requires the full participation of the IT and administrative teams, it is important to analyze the amount of internal information and its applied uses before identifying opportunities for improvement.

I am signing off with this; there is more to explore in cloud computing. Have a great time!
HTML5 Canvas Image Effects App – Adding Blur

Today we continue our HTML5 canvas examples. I hope you remember the application we made about two weeks ago; investigating those image processing functions was surely interesting. Today I decided to continue and add a new filter: Blur. Here are our demo and downloadable package:

Live Demo

Ok, download the example files and let's start coding!

Step 1. HTML

Here is the full HTML of my demo:

index.html

<!DOCTYPE html>
<html lang="en" >
<head>
    <link href="css/main.css" rel="stylesheet" type="text/css" />
    <script type="text/javascript" src="js/script.js"></script>
    <title>HTML5 Canvas Image Effects App - Adding Blur (demo)</title>
</head>
<body>
    <div class="example">
        <h3><a href="https://www.script-tutorials.com/html5-canvas-image-effects-app/">HTML5 Canvas Image Effects demo (adding blur effect) | Script Tutorials</a></h3>
        <div class="column1">
            <input type="button" onclick="resetToGrayscale()" value="Grayscale" /><br />
            <input type="button" onclick="resetToColor()" value="Color" /><br />
            <input type="button" onclick="resetToBlur()" value="Blur(1)" /><br />
            <input type="button" onclick="reset()" value="Reset" /><br />
            <input type="button" onclick="changeGrayValue(0.1)" value="Lighter" /><br />
            <input type="button" onclick="changeGrayValue(-0.1)" value="Darker" /><br />
            Red: <input type="button" onclick="changeColorValue('er', 10)" value="More" />
            <input type="button" onclick="changeColorValue('er', -10)" value="Less" /><br />
            Green: <input type="button" onclick="changeColorValue('eg', 10)" value="More" />
            <input type="button" onclick="changeColorValue('eg', -10)" value="Less" /><br />
            Blue: <input type="button" onclick="changeColorValue('eb', 10)" value="More" />
            <input type="button" onclick="changeColorValue('eb', -10)" value="Less" />
            Blur: <input type="button" onclick="changeBlurValue(1)" value="More" />
            <input type="button" onclick="changeBlurValue(-1)" value="Less" /><br />
        </div>
        <div class="column2">
            <canvas id="orig" width="500" height="333"></canvas>
            <canvas id="panel" width="500" height="333"></canvas>
        </div>
        <div style="clear:both;"></div>
    </div>
</body>
</html>

Note that there are now two canvas objects. The first one contains the original image; the second one shows our effects. I have also added three new buttons: the first switches to the Blur effect, and the next two increase/decrease the blur level.

Step 2. CSS

Here are the CSS styles used:

css/main.css

body{background:#eee;font-family:Verdana, Helvetica, Arial, sans-serif;margin:0;padding:0}
.example{background:#FFF;width:730px;font-size:80%;border:1px #000 solid;margin:20px auto;padding:15px;position:relative;-moz-border-radius: 3px;-webkit-border-radius: 3px}
h3 {
    text-align:center;
}
.column1 {
    float:left;
    padding-right: 10px;
    text-align:right;
    width:185px;
}
.column2 {
    float:left;
    width:500px;
    background-color:#888;
    padding:10px;
}
input[type=button] {
    margin:5px;
}

Step 3. JS

Now for the most interesting part: the inner functionality (the HTML5 canvas script).

js/script.js

var canvas;
var context;
var iW = 0; // image width
var iH = 0; // image height
var p1 = 0.99;
var p2 = 0.99;
var p3 = 0.99;
var er = 0; // extra red
var eg = 0; // extra green
var eb = 0; // extra blue
var iBlurRate = 0;
var func = 'color'; // last used function

window.onload = function() {
    // creating context for original image
    canvasOrig = document.getElementById('orig');
    contextOrig = canvasOrig.getContext('2d');

    // drawing original image
    var imgObj = new Image();
    imgObj.onload = function () {
        iW = this.width;
        iH = this.height;
        contextOrig.drawImage(imgObj, 0, 0, iW, iH); // draw the image on the canvas
    }
    imgObj.src = 'images/01.jpg';

    // creating testing context
    canvas = document.getElementById('panel');
    context = canvas.getContext('2d');
};

function Grayscale() {
    func = 'grayscale'; // last used function
    var imgd = contextOrig.getImageData(0, 0, iW, iH);
    var data = imgd.data;
    for (var i = 0, n = data.length; i < n; i += 4) {
        var grayscale = data[i] * p1 + data[i+1] * p2 + data[i+2] * p3;
        data[i] = grayscale + er; // red
        data[i+1] = grayscale + eg; // green
        data[i+2] = grayscale + eb; // blue
    }
    context.putImageData(imgd, 0, 0);
}

function Color() {
    func = 'color'; // last used function
    var imgd = contextOrig.getImageData(0, 0, iW, iH);
    var data = imgd.data;
    for (var i = 0, n = data.length; i < n; i += 4) {
        data[i] = data[i]*p1+er; // red
        data[i+1] = data[i+1]*p2+eg; // green
        data[i+2] = data[i+2]*p3+eb; // blue
    }
    context.putImageData(imgd, 0, 0);
}

function Blur() {
    func = 'blur'; // last used function
    var imgd = contextOrig.getImageData(0, 0, iW, iH);
    var data = imgd.data;
    for (br = 0; br < iBlurRate; br += 1) {
        for (var i = 0, n = data.length; i < n; i += 4) {
            // channel order per pixel: [i]=red, [i+1]=green, [i+2]=blue, [i+3]=alpha
            iMW = 4 * iW;
            iSumRed = iSumGreen = iSumBlue = iSumAlpha = 0;
            iCnt = 0;

            // data of close pixels (from all 8 surrounding pixels)
            aCloseData = [
                i - iMW - 4, i - iMW, i - iMW + 4, // top pixels
                i - 4, i + 4, // middle pixels
                i + iMW - 4, i + iMW, i + iMW + 4 // bottom pixels
            ];

            // calculating sum values of all close pixels
            for (e = 0; e < aCloseData.length; e += 1) {
                if (aCloseData[e] >= 0 && aCloseData[e] <= data.length - 3) {
                    iSumRed += data[aCloseData[e]];
                    iSumGreen += data[aCloseData[e] + 1];
                    iSumBlue += data[aCloseData[e] + 2];
                    iSumAlpha += data[aCloseData[e] + 3];
                    iCnt += 1;
                }
            }

            // apply average values
            data[i] = (iSumRed / iCnt)*p1+er;
            data[i+1] = (iSumGreen / iCnt)*p2+eg;
            data[i+2] = (iSumBlue / iCnt)*p3+eb;
            data[i+3] = (iSumAlpha / iCnt);
        }
    }
    context.putImageData(imgd, 0, 0);
}

function reset() {
    switch(func) {
        case 'grayscale': resetToGrayscale(); break;
        case 'color': resetToColor(); break;
        case 'blur': resetToBlur(); break;
    }
}

function changeGrayValue(val) {
    p1 += val;
    p2 += val;
    p3 += val;
    switch(func) {
        case 'grayscale': Grayscale(); break;
        case 'color': Color(); break;
        case 'blur': Blur(); break;
    }
}

function changeColorValue(sobj, val) {
    switch (sobj) {
        case 'er': er += val; break;
        case 'eg': eg += val; break;
        case 'eb': eb += val; break;
    }
    switch(func) {
        case 'grayscale': Grayscale(); break;
        case 'color': Color(); break;
        case 'blur': Blur(); break;
    }
}

function changeBlurValue(val) {
    iBlurRate += val;
    if (iBlurRate <= 0) Color();
    if (iBlurRate > 4) iBlurRate = 4;
    Blur();
}

function resetToColor() {
    p1 = 1; p2 = 1; p3 = 1;
    er = eg = eb = 0;
    iBlurRate = 0;
    Color();
}

function resetToBlur() {
    p1 = 1; p2 = 1; p3 = 1;
    er = eg = eb = 0;
    iBlurRate = 1;
    Blur();
}

function resetToGrayscale() {
    p1 = 0.3; p2 = 0.59; p3 = 0.11;
    er = eg = eb = 0;
    iBlurRate = 0;
    Grayscale();
}

As you can see, the sources have changed slightly. Now we have two canvas objects, so we use the 'getImageData' function only on the first canvas, which holds the original image. I also added new functions for the Blur effect. So how do I blur the image? I pass through all the pixels of the image, take the 8 pixels surrounding each pixel, calculate the average red/green/blue (and alpha) values of those 8 pixels, and then apply the averaged values to the current pixel.

Live Demo

Conclusion

Not bad, is it? Today we added a new, interesting effect to our application: Blur. I hope that in coming articles we will continue these image processing lessons. I will be glad to see your thanks and comments. Good luck!
Suppose I have a list of tuples and I want to convert it to multiple lists.

For example, the list of tuples is

[(1,2),(3,4),(5,6),]

Is there any built-in function in Python that converts it to:

[1,3,5],[2,4,6]

This can be a simple program. But I am just curious about the existence of such a built-in function in Python.

3 Answers

(accepted answer, 8 votes)

The built-in function zip() will almost do what you want:

>>> zip(*[(1, 2), (3, 4), (5, 6)])
[(1, 3, 5), (2, 4, 6)]

The only difference is that you get tuples instead of lists. You can convert them to lists using

map(list, zip(*[(1, 2), (3, 4), (5, 6)]))

From the python docs: zip() in conjunction with the * operator can be used to unzip a list.

Specific example:

>>> zip((1,3,5),(2,4,6))
[(1, 2), (3, 4), (5, 6)]
>>> zip(*[(1, 2), (3, 4), (5, 6)])
[(1, 3, 5), (2, 4, 6)]

Or, if you really want lists:

>>> map(list, zip(*[(1, 2), (3, 4), (5, 6)]))
[[1, 3, 5], [2, 4, 6]]

Use:

a = [(1,2),(3,4),(5,6),]
b = zip(*a)
>>> [(1, 3, 5), (2, 4, 6)]
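One note worth adding (this is an addition, not from the original thread): the answers above show Python 2 output, where zip() returns a list. On Python 3, zip() returns a lazy iterator, so an explicit conversion is needed:

>>> a = [(1, 2), (3, 4), (5, 6)]
>>> list(zip(*a))                 # Python 3: materialize the iterator
[(1, 3, 5), (2, 4, 6)]
>>> [list(t) for t in zip(*a)]    # if you really want lists of lists
[[1, 3, 5], [2, 4, 6]]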
libdai.git / include / dai / weightedgraph.h

/* This file is part of libDAI - http://www.libdai.org/
 *
 * Copyright (c) 2006-2011, The libDAI authors. All rights reserved.
 *
 * Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
 */


/** \file
 *  \brief Defines some utility functions for (weighted) undirected graphs, trees and rooted trees.
 *  \todo Improve general support for graphs and trees (in particular, a good tree implementation is needed).
 */


#ifndef __defined_libdai_weightedgraph_h
#define __defined_libdai_weightedgraph_h


#include <vector>
#include <map>
#include <iostream>
#include <set>
#include <limits>
#include <climits>   // Work-around for bug in boost graph library

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/prim_minimum_spanning_tree.hpp>
#include <boost/graph/kruskal_min_spanning_tree.hpp>

#include <dai/util.h>
#include <dai/exceptions.h>
#include <dai/graph.h>


namespace dai {


/// Represents a directed edge
class DEdge {
    public:
        /// First node index (source of edge)
        size_t first;

        /// Second node index (target of edge)
        size_t second;

        /// Default constructor
        DEdge() : first(0), second(0) {}

        /// Constructs a directed edge pointing from \a m1 to \a m2
        DEdge( size_t m1, size_t m2 ) : first(m1), second(m2) {}

        /// Tests for equality
        bool operator==( const DEdge &x ) const { return ((first == x.first) && (second == x.second)); }

        /// Smaller-than operator (performs lexicographical comparison)
        bool operator<( const DEdge &x ) const {
            return( (first < x.first) || ((first == x.first) && (second < x.second)) );
        }

        /// Writes a directed edge to an output stream
        friend std::ostream & operator << (std::ostream & os, const DEdge & e) {
            os << "(" << e.first << "->" << e.second << ")";
            return os;
        }

        /// Formats a directed edge as a string
        std::string toString() const {
            std::stringstream ss;
            ss << *this;
            return ss.str();
        }
};


/// Represents an undirected edge
class UEdge {
    public:
        /// First node index
        size_t first;

        /// Second node index
        size_t second;

        /// Default constructor
        UEdge() : first(0), second(0) {}

        /// Constructs an undirected edge between \a m1 and \a m2
        UEdge( size_t m1, size_t m2 ) : first(m1), second(m2) {}

        /// Construct from DEdge
        UEdge( const DEdge &e ) : first(e.first), second(e.second) {}

        /// Tests for inequality (disregarding the ordering of the nodes)
        bool operator==( const UEdge &x ) {
            return ((first == x.first) && (second == x.second)) || ((first == x.second) && (second == x.first));
        }

        /// Smaller-than operator
        bool operator<( const UEdge &x ) const {
            size_t s = std::min( first, second );
            size_t l = std::max( first, second );
            size_t xs = std::min( x.first, x.second );
            size_t xl = std::max( x.first, x.second );
            return( (s < xs) || ((s == xs) && (l < xl)) );
        }

        /// Writes an undirected edge to an output stream
        friend std::ostream & operator << (std::ostream & os, const UEdge & e) {
            if( e.first < e.second )
                os << "{" << e.first << "--" << e.second << "}";
            else
                os << "{" << e.second << "--" << e.first << "}";
            return os;
        }

        /// Formats an undirected edge as a string
        std::string toString() const {
            std::stringstream ss;
            ss << *this;
            return ss.str();
        }
};


/// Represents an undirected graph, implemented as a std::set of undirected edges
class GraphEL : public std::set<UEdge> {
    public:
        /// Default constructor
        GraphEL() {}

        /// Construct from range of objects that can be cast to UEdge
        template <class InputIterator>
        GraphEL( InputIterator begin, InputIterator end ) {
            insert( begin, end );
        }

        /// Construct from GraphAL
        GraphEL( const GraphAL& G ) {
            for( size_t n1 = 0; n1 < G.nrNodes(); n1++ )
                bforeach( const Neighbor n2, G.nb(n1) )
                    if( n1 < n2 )
                        insert( UEdge( n1, n2 ) );
        }
};


/// Represents an undirected weighted graph, with weights of type \a T, implemented as a std::map mapping undirected edges to weights
template<class T> class WeightedGraph : public std::map<UEdge, T> {};


/// Represents a rooted tree, implemented as a vector of directed edges
/** By convention, the edges are stored such that they point away from
 *  the root and such that edges nearer to the root come before edges
 *  farther away from the root.
 */
class RootedTree : public std::vector<DEdge> {
    public:
        /// Default constructor
        RootedTree() {}

        /// Constructs a rooted tree from a tree and a root
        /** \pre T has no cycles and contains node \a Root
         */
        RootedTree( const GraphEL &T, size_t Root );
};


/// Constructs a minimum spanning tree from the (non-negatively) weighted graph \a G.
/** \param G Weighted graph that should have non-negative weights.
 *  \param usePrim If true, use Prim's algorithm (complexity O(E log(V))), otherwise, use Kruskal's algorithm (complexity O(E log(E))).
 *  \note Uses implementation from Boost Graph Library.
 *  \note The vertices of \a G must be in the range [0,N) where N is the number of vertices of \a G.
 */
template<typename T> RootedTree MinSpanningTree( const WeightedGraph<T> &G, bool usePrim ) {
    RootedTree result;
    if( G.size() > 0 ) {
        using namespace boost;
        using namespace std;
        typedef adjacency_list< vecS, vecS, undirectedS, no_property, property<edge_weight_t, double> > boostGraph;

        set<size_t> nodes;
        vector<UEdge> edges;
        vector<double> weights;
        edges.reserve( G.size() );
        weights.reserve( G.size() );
        for( typename WeightedGraph<T>::const_iterator e = G.begin(); e != G.end(); e++ ) {
            weights.push_back( e->second );
            edges.push_back( e->first );
            nodes.insert( e->first.first );
            nodes.insert( e->first.second );
        }

        size_t N = nodes.size();
        for( set<size_t>::const_iterator it = nodes.begin(); it != nodes.end(); it++ )
            if( *it >= N )
                DAI_THROWE(RUNTIME_ERROR,"Vertices must be in range [0..N) where N is the number of vertices.");

        boostGraph g( edges.begin(), edges.end(), weights.begin(), nodes.size() );
        size_t root = *(nodes.begin());
        GraphEL tree;
        if( usePrim ) {
            // Prim's algorithm
            vector< graph_traits< boostGraph >::vertex_descriptor > p(N);
            prim_minimum_spanning_tree( g, &(p[0]) );

            // Store tree edges in result
            for( size_t i = 0; i != p.size(); i++ ) {
                if( p[i] != i )
                    tree.insert( UEdge( p[i], i ) );
            }
        } else {
            // Kruskal's algorithm
            vector< graph_traits< boostGraph >::edge_descriptor > t;
            t.reserve( N - 1 );
            kruskal_minimum_spanning_tree( g, std::back_inserter(t) );

            // Store tree edges in result
            for( size_t i = 0; i != t.size(); i++ ) {
                size_t v1 = source( t[i], g );
                size_t v2 = target( t[i], g );
                if( v1 != v2 )
                    tree.insert( UEdge( v1, v2 ) );
            }
        }

        // Direct edges in order to obtain a rooted tree
        result = RootedTree( tree, root );
    }
    return result;
}


/// Constructs a maximum spanning tree from the (non-negatively) weighted graph \a G.
/** \param G Weighted graph that should have non-negative weights.
 *  \param usePrim If true, use Prim's algorithm (complexity O(E log(V))), otherwise, use Kruskal's algorithm (complexity O(E log(E))).
 *  \note Uses implementation from Boost Graph Library.
 *  \note The vertices of \a G must be in the range [0,N) where N is the number of vertices of \a G.
 */
template<typename T> RootedTree MaxSpanningTree( const WeightedGraph<T> &G, bool usePrim ) {
    if( G.size() == 0 )
        return RootedTree();
    else {
        T maxweight = G.begin()->second;
        for( typename WeightedGraph<T>::const_iterator it = G.begin(); it != G.end(); it++ )
            if( it->second > maxweight )
                maxweight = it->second;
        // make a copy of the graph
        WeightedGraph<T> gr( G );
        // invoke MinSpanningTree with negative weights
        // (which have to be shifted to satisfy positivity criterion)
        for( typename WeightedGraph<T>::iterator it = gr.begin(); it != gr.end(); it++ )
            it->second = maxweight - it->second;
        return MinSpanningTree( gr, usePrim );
    }
}


} // end of namespace dai


#endif
6 is what percent of 1020?

6 of 1020 is 0.59%

Steps to solve "what percent is 6 of 1020?"

1. 6 of 1020 can be written as: 6/1020
2. To find the percentage, we need to find an equivalent fraction with denominator 100. Multiply both numerator & denominator by 100:
   6/1020 × 100/100
3. = (6 × 100/1020) × 1/100 = 0.59/100
4. Therefore, the answer is 0.59%

If you are using a calculator, simply enter 6÷1020×100, which will give you 0.59 as the answer.
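To double-check the arithmetic programmatically (an addition, not part of the original page), a quick Python session gives the same rounded result:

>>> round(6 / 1020 * 100, 2)
0.59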
The Review of Grid Computing Vs Cloud Computing

The grid computing system is a set of widely distributed resources working toward a common goal. It is the brother of cloud computing and the sister of the supercomputer. We can think of a grid as a distributed system connected to a single network, working with large volumes of files. It is essentially a cluster-type system, which is why people also call it cluster computing. Grid systems tend to be more geographically dispersed and heterogeneous by nature.

Grid networks also come in various types. A single grid is like a dedicated connection, while a common grid performs multiple tasks. The size of a grid is large, so a grid system resembles supercomputing: it consists of many networks, computers, and middleware, dedicated to specific functions over large volumes of data. In the grid process, each task is divided into various processes, and all the processes start executing simultaneously on different computers. As a result, execution takes only a few seconds, and you enjoy the flavor of supercomputing.

Types of Grid Computing

Computational Grid

A computational grid is developed to process a huge volume of data. Research organizations, government organizations, and non-government organizations typically use computational grids. It is a distributed system that appears as a single entity but possesses large computational power. The computational grid has advantages such as:

• Technology improvement and reduced economic cost
• Utilization of idle system capacity
• A problem-solving technique that does not require a supercomputer

Data Grid

A data grid shares, stores, and maintains resources during data processing. All the processing is performed on different computers, so it takes less time and offers the advantage of a supercomputer. An internal P2P network is an example of data grid computing.

Collaboration Grid

A collaboration grid promotes interoperability among various departments, which coordinate to work simultaneously. Data sharing is highly controlled to reduce access violations between domains. The main feature of a collaboration grid is that it can process more than one task simultaneously.

Utility Grid

A utility grid works as a service, provided on demand. This grid combines the power of all three grid types discussed above.

Enterprise Grid

An enterprise grid is developed by a big enterprise, usually as its own venture, to coordinate its subsidiaries. It may have one or more data centers among its systems. The network is kept internal for better security and privacy.

Grid Computing Architecture Diagram with Explanation

[Diagram: grid computing architecture and its relation to cloud computing and the supercomputer]

Example of Grid Computing

Grid systems take on challenges such as financial modeling, earthquake simulation, protein folding, and climate/weather modeling. Some examples are:

• Applied by the National Science Foundation's National Technology Grid
• NASA's Information Power Grid
• Pratt & Whitney
• Bristol-Myers Squibb Co.
• American Express
• Berkeley Open Infrastructure for Network Computing (BOINC)
• BEinGRID (Business Experiments in Grid) of the European Commission
• Enabling Grids for E-sciencE
• NASA Advanced Supercomputing facility (NAS) project
• United Devices Cancer Research Project based on its Grid MP product

Grid Computing Software

There is a lot of grid computing software. The Grid Computing Info Centre (GRID Infoware) is the authorized website for different grid computing software; you can review the available software there.

Why the Brother of Cloud Computing?

The main target of cloud computing is to provide infrastructure as a service, and grid computing has the same function. Cloud computing is the elder brother, though, because it offers two more options: platform as a service and software as a service. A grid system performs a task by allocating it across the memory of different computers in the grid; cloud computing performs the same task with its redundant storage and applications in the cloud. In this process, both have some limitations. Cloud computing requires the internet, sometimes high-speed internet. Grid computing, on the other hand, uses a local area network, so its data cannot be accessed from anywhere.

How the Sister of the Supercomputer?

This is also a playful comparison, but the functions of a supercomputer and grid computing are the same. The intention of a supercomputer is to perform fast, and grid computing has the same purpose: after receiving a task, the grid divides it and allocates the pieces to different computers on the LAN. A single PC works on a single piece, so it takes less time to process big data, and the user gets the flavor of a supercomputer.

Cloud Computing Vs Grid Computing

Basic Difference

Cloud computing is supported by remote servers, hosted over the internet, to manage data and provide on-demand services to the user through an integrated hardware and software network. Grid computing, on the other hand, is a distributed system that connects LAN, MAN, and CAN networks, and the grid does not require the internet.

Difference in Types

Cloud computing is segregated into the private cloud, public cloud, and hybrid cloud, and it has three models: SaaS, PaaS, and IaaS. Grid computing, on the other hand, has Distributed Computing systems, Distributed Pervasive systems, and Distributed Information systems.

Goals

The main goal of cloud computing is to reduce cost; its secondary goal is to make data available everywhere. The grid system, on the other hand, focuses on accuracy and speed by dividing its tasks among the connected computers.

Cost

People use cloud computing as a low-cost strategy. Cloud computing service providers offer various services, the costing procedure is pay-as-you-go, and both hardware and software are available in the cloud. The grid system, on the other hand, requires numerous computers, and both the hardware and the software are costly.

Management System

Cloud computing is centrally managed by a third party. Grid systems, on the other hand, are distributed, decentralized systems, so managing a grid system is a bit more difficult.

Final Thought

Cloud computing and the grid seem the same, but there are many differences: the first is internet-based, while the second is LAN-based. However, both offer a different flavor of computing, so organizations focus on both cloud computing and grid computing.
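To make the task-splitting idea above concrete, here is a toy sketch (my addition; a loose single-machine analogy using Python's multiprocessing, not real grid middleware): a large job is cut into chunks, and the chunks run simultaneously on different workers, just as a grid spreads processes across different computers.

from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for the real work a grid node would do
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the task into 4 processes, like a grid dividing a task across computers
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunks)  # run simultaneously
    print(sum(partial_results))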
Are playlists used for the recommendation engine?

Are playlists accounted for in the recommendation engine? Like, if several users have added the same two songs to the same playlist, would those two songs be marked as more similar to each other than to a third, otherwise equivalent song?

If not, I feel like playlist data would be quite valuable.

2 Likes

Not yet. There aren't enough playlists to make this worthwhile yet, but eventually there will be enough and we will use them.

3 Likes

That's fair, glad to know that it's on the radar~

2 Likes
How to set up the KeePass credential provider

Article 2648

Initial Setup

1. Download and install KeePass 2 (Professional Edition) if you have not done so already.
2. Download and install the KeePassRest Plugin for KeePass.
3. Ensure that Microsoft .NET Framework 4.6.2 is installed.
4. Start KeePass and verify that the KeePassRest Plugin has been loaded. Go to the menu: Tools -> Plugins.
5. Open your kdbx database in KeePass.
6. Start SmartFTP.
7. In SmartFTP, go to the main menu: File - Settings. Go to the KeePass settings dialog.
8. Select the client certificate you want to use for authorization.
9. Click the Test button.
10. KeePass shows an authentication prompt. Click Yes to allow the client to access the KeePass store.
11. The test in SmartFTP will now complete successfully.

Setup Favorite to use KeePass store

1. Go to the Favorite Properties (tab: Tools, click on Favorite Properties; or in the quick connect dialog: File - Connection).
2. In the General dialog, select KeePass for the Provider.
3. Click OK to save the changes.
4. Connect to the Favorite.
5. Enter the credentials (username, password, etc.).
6. The credentials are now stored in KeePass and will be retrieved from there for the next connection.

The KeePass feature in SmartFTP requires the Ultimate or Enterprise Edition.
Live Reloading for a Production Next.js app

Add Live Reloading Support

Now we are going to add the live reload functionality. For that, we need an additional NPM package. Install it with:

yarn add swr

Now create a file called lib/use-live-reload.js and add this content:

import useSWR from 'swr'
import {useRouter} from 'next/router'
import { useEffect, useState } from 'react'

async function textFetcher(path) {
  const res = await fetch(path)
  return res.text()
}

export default function useLiveReload() {
  const router = useRouter()
  const [prevData, setPrevData] = useState(null)
  const {data} = useSWR(router.asPath, textFetcher, {
    refreshInterval: 1000,
  })

  useEffect(() => {
    if (!data) {
      return
    }

    if (prevData && prevData !== data) {
      location.reload()
      return
    }

    setPrevData(data)
  })

  return {}
}

Then import this file in pages/index.js with:

import useLiveReload from '../lib/use-live-reload'

Finally, invoke the useLiveReload React hook inside the Index page. The final version of pages/index.js will look like this:

Now, let's do the same experiment again:

• Kill the already running app
• Run your app with yarn build && yarn start
• Visit http://localhost:3005 using a new browser tab
• Then open the data/content.md file
• Change the content
• Check the content of http://localhost:3005 for 5 seconds
• Reload the page twice

Q: How do you describe the result?

It's time for another question.

Q: Why did we use the SWR React hook, and what does it do?
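One optional refinement worth noting after the experiment (my addition, not part of the challenge): in development, Next.js already hot-reloads, so you may want the polling active only in production builds. SWR treats a refreshInterval of 0 as disabled, so a guard like this inside the hook would work:

// Hypothetical tweak inside lib/use-live-reload.js: poll only in production.
const interval = process.env.NODE_ENV === 'production' ? 1000 : 0
const {data} = useSWR(router.asPath, textFetcher, {
  refreshInterval: interval,
})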
monotone monotone Mtn Source Tree Root/paths.cc 1// copyright (C) 2005 nathaniel smith <[email protected]> 2// all rights reserved. 3// licensed to the public under the terms of the GNU GPL (>= 2) 4// see the file COPYING for details 5 6#include <iostream> 7#include <string> 8 9#include <boost/filesystem/path.hpp> 10#include <boost/filesystem/operations.hpp> 11#include <boost/filesystem/convenience.hpp> 12 13#include "constants.hh" 14#include "paths.hh" 15#include "platform.hh" 16#include "sanity.hh" 17#include "interner.hh" 18#include "transforms.hh" 19 20// some structure to ensure we aren't doing anything broken when resolving 21// filenames. the idea is to make sure 22// -- we don't depend on the existence of something before it has been set 23// -- we don't re-set something that has already been used 24template <typename T> 25struct access_tracker 26{ 27 void set(T const & val, bool may_be_initialized) 28 { 29 I(may_be_initialized || !initialized); 30 I(!used); 31 initialized = true; 32 value = val; 33 } 34 T const & get() 35 { 36 I(initialized); 37 used = true; 38 return value; 39 } 40 T const & get_but_unused() 41 { 42 I(initialized); 43 return value; 44 } 45 // for unit tests 46 void unset() 47 { 48 used = initialized = false; 49 } 50 T value; 51 bool initialized, used; 52 access_tracker() : initialized(false), used(false) {}; 53}; 54 55// paths to use in interpreting paths from various sources, 56// conceptually: 57// working_root / initial_rel_path == initial_abs_path 58 59// initial_abs_path is for interpreting relative system_path's 60static access_tracker<system_path> initial_abs_path; 61// initial_rel_path is for interpreting external file_path's 62static access_tracker<file_path> initial_rel_path; 63// working_root is for converting file_path's and bookkeeping_path's to 64// system_path's. 65static access_tracker<system_path> working_root; 66 67bookkeeping_path const bookkeeping_root("MT"); 68 69void 70save_initial_path() 71{ 72 // FIXME: BUG: this only works if the current working dir is in utf8 73 initial_abs_path.set(system_path(get_current_working_dir()), false); 74 // We still use boost::fs, so let's continue to initialize it properly. 75 fs::initial_path(); 76 fs::path::default_name_check(fs::native); 77 L(F("initial abs path is: %s") % initial_abs_path.get_but_unused()); 78} 79 80/////////////////////////////////////////////////////////////////////////// 81// verifying that internal paths are indeed normalized. 82// this code must be superfast 83/////////////////////////////////////////////////////////////////////////// 84 85// normalized means: 86// -- / as path separator 87// -- not an absolute path (on either posix or win32) 88// operationally, this means: first character != '/', first character != '\', 89// second character != ':' 90// -- no illegal characters 91// -- 0x00 -- 0x1f, 0x7f, \ are the illegal characters. \ is illegal 92// unconditionally to prevent people checking in files on posix that 93// have a different interpretation on win32 94// -- (may want to allow 0x0a and 0x0d (LF and CR) in the future, but this 95// is blocked on manifest format changing) 96// (also requires changes to 'automate inventory', possibly others, to 97// handle quoting) 98// -- no doubled /'s 99// -- no trailing / 100// -- no "." or ".." 
path components 101static inline bool 102bad_component(std::string const & component) 103{ 104 if (component == "") 105 return true; 106 if (component == ".") 107 return true; 108 if (component == "..") 109 return true; 110 return false; 111} 112 113static inline bool 114fully_normalized_path(std::string const & path) 115{ 116 // FIXME: probably should make this a 256-byte static lookup table 117 const static std::string bad_chars = std::string("\\") + constants::illegal_path_bytes + std::string(1, '\0'); 118 119 // empty path is fine 120 if (path.empty()) 121 return true; 122 // could use is_absolute_somewhere, but this is the only part of it that 123 // wouldn't be redundant 124 if (path.size() > 1 && path[1] == ':') 125 return false; 126 // first scan for completely illegal bytes 127 if (path.find_first_of(bad_chars) != std::string::npos) 128 return false; 129 // now check each component 130 std::string::size_type start, stop; 131 start = 0; 132 while (1) 133 { 134 stop = path.find('/', start); 135 if (stop == std::string::npos) 136 { 137 if (bad_component(path.substr(start))) 138 return false; 139 break; 140 } 141 if (bad_component(path.substr(start, stop - start))) 142 return false; 143 start = stop + 1; 144 } 145 return true; 146} 147 148static inline bool 149in_bookkeeping_dir(std::string const & path) 150{ 151 return path == "MT" || (path.size() >= 3 && (path.substr(0, 3) == "MT/")); 152} 153 154static inline bool 155is_valid_internal(std::string const & path) 156{ 157 return (fully_normalized_path(path) 158 && !in_bookkeeping_dir(path)); 159} 160 161file_path::file_path(file_path::source_type type, std::string const & path) 162{ 163 switch (type) 164 { 165 case internal: 166 data = path; 167 break; 168 case internal_from_user: 169 data = path; 170 N(is_valid_internal(path), 171 F("path '%s' is invalid") % path); 172 break; 173 case external: 174 N(!path.empty(), F("empty path '%s' is invalid") % path); 175 fs::path out, base, relative; 176 try 177 { 178 base = fs::path(initial_rel_path.get().as_internal()); 179 // the fs::native is needed to get it to accept paths like ".foo". 180 relative = fs::path(path, fs::native); 181 out = (base / relative).normalize(); 182 } 183 catch (std::exception & e) 184 { 185 N(false, F("path '%s' is invalid") % path); 186 } 187 data = utf8(out.string()); 188 if (data() == ".") 189 data = std::string(""); 190 N(!relative.has_root_path(), 191 F("absolute path '%s' is invalid") % relative.string()); 192 N(fully_normalized_path(data()), F("path '%s' is invalid") % data); 193 N(!in_bookkeeping_dir(data()), F("path '%s' is in bookkeeping dir") % data); 194 break; 195 } 196 I(is_valid_internal(data())); 197} 198 199bookkeeping_path::bookkeeping_path(std::string const & path) 200{ 201 I(fully_normalized_path(path)); 202 I(in_bookkeeping_dir(path)); 203 data = path; 204} 205 206bool 207bookkeeping_path::is_bookkeeping_path(std::string const & path) 208{ 209 return in_bookkeeping_dir(path); 210} 211 212/////////////////////////////////////////////////////////////////////////// 213// splitting/joining 214// this code must be superfast 215// it depends very much on knowing that it can only be applied to fully 216// normalized, relative, paths. 217/////////////////////////////////////////////////////////////////////////// 218 219static interner<path_component> pc_interner("", the_null_component); 220 221// This function takes a vector of path components and joins them into a 222// single file_path. 
// element is the empty path component (""); this represents the null path,
// which we use to represent non-existent files.  Alternatively, input may be
// a multi-element vector, in which case all elements of the vector are
// required to be non-null.  The following are valid inputs (with strings
// replaced by their interned version, of course):
//   - [""]
//   - ["foo"]
//   - ["foo", "bar"]
// The following are not:
//   - []
//   - ["foo", ""]
//   - ["", "bar"]
file_path::file_path(split_path const & sp)
{
  split_path::const_iterator i = sp.begin();
  I(i != sp.end());
  if (sp.size() > 1)
    I(!null_name(*i));
  std::string tmp = pc_interner.lookup(*i);
  I(tmp != bookkeeping_root.as_internal());
  for (++i; i != sp.end(); ++i)
    {
      I(!null_name(*i));
      tmp += "/";
      tmp += pc_interner.lookup(*i);
    }
  data = tmp;
}

//
// this takes a path of the form
//
//   "p[0]/p[1]/.../p[n-1]/p[n]"
//
// and fills in a vector of paths corresponding to p[0] ... p[n-1]
//
// FIXME: this code carefully duplicates the behavior of the old path
// splitting functions, in that it is _not_ the inverse to the above joining
// function.  The difference is that if you pass a null path (the one
// represented by the empty string, or 'file_path()'), then it will return an
// _empty_ vector.  This vector will not be suitable to pass to the above path
// joiner; to get the null path back again, you have to pass the above
// function a single-element vector containing the null component.
//
// Why does it work this way?  Because that's what the old code did, and
// that's what change_set.cc was written around; and it's much much easier to
// make this code do something weird than to try and fix change_set.cc.  When
// change_set.cc is rewritten, however, you should revisit the semantics of
// this function.
void
file_path::split(split_path & sp) const
{
  sp.clear();
  if (empty())
    return;
  std::string::size_type start, stop;
  start = 0;
  std::string const & s = data();
  while (1)
    {
      stop = s.find('/', start);
      if (stop < 0 || stop > s.length())
        {
          sp.push_back(pc_interner.intern(s.substr(start)));
          break;
        }
      sp.push_back(pc_interner.intern(s.substr(start, stop - start)));
      start = stop + 1;
    }
}

///////////////////////////////////////////////////////////////////////////
// localizing file names (externalizing them)
// this code must be superfast when there is no conversion needed
///////////////////////////////////////////////////////////////////////////

std::string
any_path::as_external() const
{
#ifdef __APPLE__
  // on OS X paths for the filesystem/kernel are UTF-8 encoded, regardless of
  // locale.
  return data();
#else
  // on normal systems we actually have some work to do, alas.
  // not much, though, because utf8_to_system does all the hard work.  it is
  // carefully optimized.  do not screw it up.
  external out;
  utf8_to_system(data, out);
  return out();
#endif
}

///////////////////////////////////////////////////////////////////////////
// writing out paths
///////////////////////////////////////////////////////////////////////////

std::ostream &
operator <<(std::ostream & o, any_path const & a)
{
  o << a.as_internal();
  return o;
}

///////////////////////////////////////////////////////////////////////////
// path manipulation
// this code's speed does not matter much
///////////////////////////////////////////////////////////////////////////

static bool
is_absolute_here(std::string const & path)
{
  if (path.empty())
    return false;
  if (path[0] == '/')
    return true;
#ifdef WIN32
  if (path[0] == '\\')
    return true;
  if (path.size() > 1 && path[1] == ':')
    return true;
#endif
  return false;
}

static inline bool
is_absolute_somewhere(std::string const & path)
{
  if (path.empty())
    return false;
  if (path[0] == '/')
    return true;
  if (path[0] == '\\')
    return true;
  if (path.size() > 1 && path[1] == ':')
    return true;
  return false;
}

file_path
file_path::operator /(std::string const & to_append) const
{
  I(!is_absolute_somewhere(to_append));
  if (empty())
    return file_path_internal(to_append);
  else
    return file_path_internal(data() + "/" + to_append);
}

bookkeeping_path
bookkeeping_path::operator /(std::string const & to_append) const
{
  I(!is_absolute_somewhere(to_append));
  I(!empty());
  return bookkeeping_path(data() + "/" + to_append);
}

system_path
system_path::operator /(std::string const & to_append) const
{
  I(!empty());
  I(!is_absolute_here(to_append));
  return system_path(data() + "/" + to_append);
}

///////////////////////////////////////////////////////////////////////////
// system_path
///////////////////////////////////////////////////////////////////////////

static std::string
normalize_out_dots(std::string const & path)
{
#ifdef WIN32
  return fs::path(path, fs::native).normalize().string();
#else
  return fs::path(path, fs::native).normalize().native_file_string();
#endif
}

system_path::system_path(any_path const & other, bool in_true_working_copy)
{
  I(!is_absolute_here(other.as_internal()));
  system_path wr;
  if (in_true_working_copy)
    wr = working_root.get();
  else
    wr = working_root.get_but_unused();
  data = normalize_out_dots((wr / other.as_internal()).as_internal());
}

static inline std::string const_system_path(utf8 const & path)
{
  N(!path().empty(), F("invalid path ''"));
  std::string expanded = tilde_expand(path)();
  if (is_absolute_here(expanded))
    return normalize_out_dots(expanded);
  else
    return normalize_out_dots((initial_abs_path.get() / expanded).as_internal());
}

system_path::system_path(std::string const & path)
{
  data = const_system_path(path);
}

system_path::system_path(utf8 const & path)
{
  data = const_system_path(path);
}

///////////////////////////////////////////////////////////////////////////
// working copy (and path roots) handling
///////////////////////////////////////////////////////////////////////////

bool
find_and_go_to_working_copy(system_path const & search_root)
{
  // unimplemented
  fs::path root(search_root.as_external(), fs::native);
  fs::path bookdir(bookkeeping_root.as_external(), fs::native);
  fs::path current(fs::initial_path());
  fs::path removed;
  fs::path check = current / bookdir;

  L(F("searching for '%s' directory with root '%s'\n")
    % bookdir.string()
    % root.string());

  while (current != root
         && current.has_branch_path()
         && current.has_leaf()
         && !fs::exists(check))
    {
      L(F("'%s' not found in '%s' with '%s' removed\n")
        % bookdir.string() % current.string() % removed.string());
      removed = fs::path(current.leaf(), fs::native) / removed;
      current = current.branch_path();
      check = current / bookdir;
    }

  L(F("search for '%s' ended at '%s' with '%s' removed\n")
    % bookdir.string() % current.string() % removed.string());

  if (!fs::exists(check))
    {
      L(F("'%s' does not exist\n") % check.string());
      return false;
    }

  if (!fs::is_directory(check))
    {
      L(F("'%s' is not a directory\n") % check.string());
      return false;
    }

  // check for MT/. and MT/.. to see if mt dir is readable
  if (!fs::exists(check / ".") || !fs::exists(check / ".."))
    {
      L(F("problems with '%s' (missing '.' or '..')\n") % check.string());
      return false;
    }

  working_root.set(current.native_file_string(), true);
  initial_rel_path.set(file_path_internal(removed.string()), true);

  L(F("working root is '%s'") % working_root.get_but_unused());
  L(F("initial relative path is '%s'") % initial_rel_path.get_but_unused());

  change_current_working_dir(working_root.get_but_unused());

  return true;
}

void
go_to_working_copy(system_path const & new_working_copy)
{
  working_root.set(new_working_copy, true);
  initial_rel_path.set(file_path(), true);
  change_current_working_dir(new_working_copy);
}

///////////////////////////////////////////////////////////////////////////
// tests
///////////////////////////////////////////////////////////////////////////

#ifdef BUILD_UNIT_TESTS
#include "unit_tests.hh"

static void test_null_name()
{
  BOOST_CHECK(null_name(the_null_component));
}

static void test_file_path_internal()
{
  char const * baddies[] = {"/foo",
                            "foo//bar",
                            "foo/../bar",
                            "../bar",
                            "MT/blah",
                            "foo/bar/",
                            "foo/./bar",
                            "./foo",
                            ".",
                            "..",
                            "c:\\foo",
                            "c:foo",
                            "c:/foo",
                            0 };
  initial_rel_path.unset();
  initial_rel_path.set(file_path(), true);
  for (char const ** c = baddies; *c; ++c)
    {
      BOOST_CHECK_THROW(file_path_internal(*c), std::logic_error);
      BOOST_CHECK_THROW(file_path_internal_from_user(std::string(*c)),
                        informative_failure);
    }
  initial_rel_path.unset();
  initial_rel_path.set(file_path_internal("blah/blah/blah"), true);
  for (char const ** c = baddies; *c; ++c)
    {
      BOOST_CHECK_THROW(file_path_internal(*c), std::logic_error);
      BOOST_CHECK_THROW(file_path_internal_from_user(std::string(*c)),
                        informative_failure);
    }

  BOOST_CHECK(file_path().empty());
  BOOST_CHECK(file_path_internal("").empty());

  char const * goodies[] = {"",
                            "a",
                            "foo",
                            "foo/bar/baz",
                            "foo/bar.baz",
                            "foo/with-hyphen/bar",
                            "foo/with_underscore/bar",
                            "foo/with,other+@weird*%#$=stuff/bar",
                            ".foo/bar",
                            "..foo/bar",
                            "MTfoo/bar",
#ifndef WIN32
                            "foo:bar",
#endif
                            0 };

  for (int i = 0; i < 2; ++i)
    {
      initial_rel_path.unset();
      initial_rel_path.set(i ? file_path()
                             : file_path_internal("blah/blah/blah"),
                           true);
      for (char const ** c = goodies; *c; ++c)
        {
          file_path fp = file_path_internal(*c);
          BOOST_CHECK(fp.as_internal() == *c);
          BOOST_CHECK(file_path_internal(fp.as_internal()) == fp);
          std::vector<path_component> split_test;
          fp.split(split_test);
          if (fp.empty())
            BOOST_CHECK(split_test.empty());
          else
            {
              BOOST_CHECK(!split_test.empty());
              file_path fp2(split_test);
              BOOST_CHECK(fp == fp2);
            }
          for (std::vector<path_component>::const_iterator i = split_test.begin();
               i != split_test.end(); ++i)
            BOOST_CHECK(!null_name(*i));
          file_path fp_user = file_path_internal_from_user(std::string(*c));
          BOOST_CHECK(fp == fp_user);
        }
    }

  initial_rel_path.unset();
}

static void check_fp_normalizes_to(char * before, char * after)
{
  L(F("check_fp_normalizes_to: '%s' -> '%s'") % before % after);
  file_path fp = file_path_external(std::string(before));
  L(F("  (got: %s)") % fp);
  BOOST_CHECK(fp.as_internal() == after);
  BOOST_CHECK(file_path_internal(fp.as_internal()) == fp);
  // we compare after to the external form too, since as far as we know
  // relative normalized posix paths are always good win32 paths too
  BOOST_CHECK(fp.as_external() == after);
  std::vector<path_component> split_test;
  fp.split(split_test);
  if (fp.empty())
    BOOST_CHECK(split_test.empty());
  else
    {
      BOOST_CHECK(!split_test.empty());
      file_path fp2(split_test);
      BOOST_CHECK(fp == fp2);
    }
  for (std::vector<path_component>::const_iterator i = split_test.begin();
       i != split_test.end(); ++i)
    BOOST_CHECK(!null_name(*i));
}

static void test_file_path_external_no_prefix()
{
  initial_rel_path.unset();
  initial_rel_path.set(file_path(), true);

  char const * baddies[] = {"/foo",
                            "../bar",
                            "MT/blah",
                            "MT",
                            "//blah",
                            "\\foo",
                            "..",
                            "c:\\foo",
                            "c:foo",
                            "c:/foo",
                            "",
                            0 };
  for (char const ** c = baddies; *c; ++c)
    {
      L(F("test_file_path_external_no_prefix: trying baddie: %s") % *c);
      BOOST_CHECK_THROW(file_path_external(utf8(*c)), informative_failure);
    }

  check_fp_normalizes_to("a", "a");
  check_fp_normalizes_to("foo", "foo");
  check_fp_normalizes_to("foo/bar", "foo/bar");
  check_fp_normalizes_to("foo/bar/baz", "foo/bar/baz");
  check_fp_normalizes_to("foo/bar.baz", "foo/bar.baz");
  check_fp_normalizes_to("foo/with-hyphen/bar", "foo/with-hyphen/bar");
  check_fp_normalizes_to("foo/with_underscore/bar", "foo/with_underscore/bar");
  check_fp_normalizes_to(".foo/bar", ".foo/bar");
  check_fp_normalizes_to("..foo/bar", "..foo/bar");
  check_fp_normalizes_to(".", "");
#ifndef WIN32
  check_fp_normalizes_to("foo:bar", "foo:bar");
#endif
  check_fp_normalizes_to("foo/with,other+@weird*%#$=stuff/bar",
                         "foo/with,other+@weird*%#$=stuff/bar");

  // Why are these tests with // in them commented out?  because boost::fs
  // sucks and can't normalize them.  FIXME.
  //check_fp_normalizes_to("foo//bar", "foo/bar");
  check_fp_normalizes_to("foo/../bar", "bar");
  check_fp_normalizes_to("foo/bar/", "foo/bar");
  check_fp_normalizes_to("foo/./bar/", "foo/bar");
  check_fp_normalizes_to("./foo", "foo");
  //check_fp_normalizes_to("foo///.//", "foo");

  initial_rel_path.unset();
}

static void test_file_path_external_prefix_a_b()
{
  initial_rel_path.unset();
  initial_rel_path.set(file_path_internal("a/b"), true);

  char const * baddies[] = {"/foo",
                            "../../../bar",
                            "../../..",
                            "//blah",
                            "\\foo",
                            "c:\\foo",
#ifdef WIN32
                            "c:foo",
                            "c:/foo",
#endif
                            "",
                            0 };
  for (char const ** c = baddies; *c; ++c)
    {
      L(F("test_file_path_external_prefix_a_b: trying baddie: %s") % *c);
      BOOST_CHECK_THROW(file_path_external(utf8(*c)), informative_failure);
    }

  check_fp_normalizes_to("foo", "a/b/foo");
  check_fp_normalizes_to("a", "a/b/a");
  check_fp_normalizes_to("foo/bar", "a/b/foo/bar");
  check_fp_normalizes_to("foo/bar/baz", "a/b/foo/bar/baz");
  check_fp_normalizes_to("foo/bar.baz", "a/b/foo/bar.baz");
  check_fp_normalizes_to("foo/with-hyphen/bar", "a/b/foo/with-hyphen/bar");
  check_fp_normalizes_to("foo/with_underscore/bar", "a/b/foo/with_underscore/bar");
  check_fp_normalizes_to(".foo/bar", "a/b/.foo/bar");
  check_fp_normalizes_to("..foo/bar", "a/b/..foo/bar");
  check_fp_normalizes_to(".", "a/b");
#ifndef WIN32
  check_fp_normalizes_to("foo:bar", "a/b/foo:bar");
#endif
  check_fp_normalizes_to("foo/with,other+@weird*%#$=stuff/bar",
                         "a/b/foo/with,other+@weird*%#$=stuff/bar");
  // why are the tests with // in them commented out?  because boost::fs sucks
  // and can't normalize them.  FIXME.
  //check_fp_normalizes_to("foo//bar", "a/b/foo/bar");
  check_fp_normalizes_to("foo/../bar", "a/b/bar");
  check_fp_normalizes_to("foo/bar/", "a/b/foo/bar");
  check_fp_normalizes_to("foo/./bar/", "a/b/foo/bar");
  check_fp_normalizes_to("./foo", "a/b/foo");
  //check_fp_normalizes_to("foo///.//", "a/b/foo");
  // things that would have been bad without the initial_rel_path:
  check_fp_normalizes_to("../foo", "a/foo");
  check_fp_normalizes_to("..", "a");
  check_fp_normalizes_to("../..", "");
  check_fp_normalizes_to("MT/foo", "a/b/MT/foo");
  check_fp_normalizes_to("MT", "a/b/MT");
#ifndef WIN32
  check_fp_normalizes_to("c:foo", "a/b/c:foo");
  check_fp_normalizes_to("c:/foo", "a/b/c:/foo");
#endif

  initial_rel_path.unset();
}

static void test_split_join()
{
  file_path fp1 = file_path_internal("foo/bar/baz");
  file_path fp2 = file_path_internal("bar/baz/foo");
  typedef std::vector<path_component> pcv;
  pcv split1, split2;
  fp1.split(split1);
  fp2.split(split2);
  BOOST_CHECK(fp1 == file_path(split1));
  BOOST_CHECK(fp2 == file_path(split2));
  BOOST_CHECK(!(fp1 == file_path(split2)));
  BOOST_CHECK(!(fp2 == file_path(split1)));
  BOOST_CHECK(split1.size() == 3);
  BOOST_CHECK(split2.size() == 3);
  BOOST_CHECK(split1[0] != split1[1]);
  BOOST_CHECK(split1[0] != split1[2]);
  BOOST_CHECK(split1[1] != split1[2]);
  BOOST_CHECK(!null_name(split1[0])
              && !null_name(split1[1])
              && !null_name(split1[2]));
  BOOST_CHECK(split1[0] == split2[2]);
  BOOST_CHECK(split1[1] == split2[0]);
  BOOST_CHECK(split1[2] == split2[1]);

  file_path fp3 = file_path_internal("");
  pcv split3;
  fp3.split(split3);
  BOOST_CHECK(split3.empty());

  pcv split4;
  // this comparison tricks the compiler into not completely eliminating this
  // code as dead...
  BOOST_CHECK_THROW(file_path(split4) == file_path(), std::logic_error);
  split4.push_back(the_null_component);
  BOOST_CHECK(file_path(split4) == file_path());
  split4.clear();
  split4.push_back(split1[0]);
  split4.push_back(the_null_component);
  split4.push_back(split1[0]);
  // this comparison tricks the compiler into not completely eliminating this
  // code as dead...
  BOOST_CHECK_THROW(file_path(split4) == file_path(), std::logic_error);

  // Make sure that we can't use joining to create a path into the bookkeeping
  // dir
  pcv split_mt1, split_mt2;
  file_path_internal("foo/MT").split(split_mt1);
  BOOST_CHECK(split_mt1.size() == 2);
  split_mt2.push_back(split_mt1[1]);
  // split_mt2 now contains the component "MT"
  BOOST_CHECK_THROW(file_path(split_mt2) == file_path(), std::logic_error);
  split_mt2.push_back(split_mt1[0]);
  // split_mt2 now contains the components "MT", "foo" in that order
  // this comparison tricks the compiler into not completely eliminating this
  // code as dead...
  BOOST_CHECK_THROW(file_path(split_mt2) == file_path(), std::logic_error);
}

static void check_bk_normalizes_to(char * before, char * after)
{
  bookkeeping_path bp(bookkeeping_root / before);
  L(F("normalizing %s to %s (got %s)") % before % after % bp);
  BOOST_CHECK(bp.as_external() == after);
  BOOST_CHECK(bookkeeping_path(bp.as_internal()).as_internal() == bp.as_internal());
}

static void test_bookkeeping_path()
{
  char const * baddies[] = {"/foo",
                            "foo//bar",
                            "foo/../bar",
                            "../bar",
                            "foo/bar/",
                            "foo/./bar",
                            "./foo",
                            ".",
                            "..",
                            "c:\\foo",
                            "c:foo",
                            "c:/foo",
                            "",
                            "a:b",
                            0 };
  std::string tmp_path_string;

  for (char const ** c = baddies; *c; ++c)
    {
      L(F("test_bookkeeping_path baddie: trying '%s'") % *c);
      BOOST_CHECK_THROW(bookkeeping_path(tmp_path_string.assign(*c)), std::logic_error);
      BOOST_CHECK_THROW(bookkeeping_root / tmp_path_string.assign(*c), std::logic_error);
    }
  BOOST_CHECK_THROW(bookkeeping_path(tmp_path_string.assign("foo/bar")), std::logic_error);
  BOOST_CHECK_THROW(bookkeeping_path(tmp_path_string.assign("a")), std::logic_error);

  check_bk_normalizes_to("a", "MT/a");
  check_bk_normalizes_to("foo", "MT/foo");
  check_bk_normalizes_to("foo/bar", "MT/foo/bar");
  check_bk_normalizes_to("foo/bar/baz", "MT/foo/bar/baz");
}

static void check_system_normalizes_to(char * before, char * after)
{
  system_path sp(before);
  L(F("normalizing '%s' to '%s' (got '%s')") % before % after % sp);
  BOOST_CHECK(sp.as_external() == after);
  BOOST_CHECK(system_path(sp.as_internal()).as_internal() == sp.as_internal());
}

static void test_system_path()
{
  initial_abs_path.unset();
  initial_abs_path.set(system_path("/a/b"), true);

  BOOST_CHECK_THROW(system_path(""), informative_failure);

  check_system_normalizes_to("foo", "/a/b/foo");
  check_system_normalizes_to("foo/bar", "/a/b/foo/bar");
  check_system_normalizes_to("/foo/bar", "/foo/bar");
  check_system_normalizes_to("//foo/bar", "//foo/bar");
#ifdef WIN32
  check_system_normalizes_to("c:foo", "c:foo");
  check_system_normalizes_to("c:/foo", "c:/foo");
#else
  check_system_normalizes_to("c:foo", "/a/b/c:foo");
  check_system_normalizes_to("c:/foo", "/a/b/c:/foo");
  check_system_normalizes_to("c:\\foo", "/a/b/c:\\foo");
  check_system_normalizes_to("foo:bar", "/a/b/foo:bar");
#endif
  // we require that system_path normalize out ..'s, because of the following
  // case:
  //   /work mkdir newdir
  //   /work$ cd newdir
  //   /work/newdir$ monotone setup --db=../foo.db
  // Now they have either "/work/foo.db" or "/work/newdir/../foo.db" in
  // MT/options
  //   /work/newdir$ cd ..
  //   /work$ mv newdir newerdir  # better name
  // Oops, now, if we stored the version with ..'s in, this working directory
  // is broken.
  check_system_normalizes_to("../foo", "/a/foo");
  check_system_normalizes_to("foo/..", "/a/b");
  check_system_normalizes_to("/foo/bar/..", "/foo");
  check_system_normalizes_to("/foo/..", "/");
  // can't do particularly interesting checking of tilde expansion, but at
  // least we can check that it's doing _something_...
  std::string tilde_expanded = system_path("~/foo").as_external();
#ifdef WIN32
  BOOST_CHECK(tilde_expanded[1] == ':');
#else
  BOOST_CHECK(tilde_expanded[0] == '/');
#endif
  BOOST_CHECK(tilde_expanded.find('~') == std::string::npos);
  // and check for the weird WIN32 version
#ifdef WIN32
  std::string tilde_expanded2 = system_path("~this_user_does_not_exist_anywhere").as_external();
  BOOST_CHECK(tilde_expanded2[0] == '/');
  BOOST_CHECK(tilde_expanded2.find('~') == std::string::npos);
#else
  BOOST_CHECK_THROW(system_path("~this_user_does_not_exist_anywhere"), informative_failure);
#endif

  // finally, make sure that the copy-from-any_path constructor works right
  // in particular, it should interpret the paths it gets as being relative to
  // the project root, not the initial path
  working_root.unset();
  working_root.set(system_path("/working/root"), true);
  initial_rel_path.unset();
  initial_rel_path.set(file_path_internal("rel/initial"), true);

  BOOST_CHECK(system_path(system_path("foo/bar")).as_internal() == "/a/b/foo/bar");
  BOOST_CHECK(!working_root.used);
  BOOST_CHECK(system_path(system_path("/foo/bar")).as_internal() == "/foo/bar");
  BOOST_CHECK(!working_root.used);
  BOOST_CHECK(system_path(file_path_internal("foo/bar"), false).as_internal()
              == "/working/root/foo/bar");
  BOOST_CHECK(!working_root.used);
  BOOST_CHECK(system_path(file_path_internal("foo/bar")).as_internal()
              == "/working/root/foo/bar");
  BOOST_CHECK(working_root.used);
  BOOST_CHECK(system_path(file_path_external(std::string("foo/bar"))).as_external()
              == "/working/root/rel/initial/foo/bar");
  file_path a_file_path;
  BOOST_CHECK(system_path(a_file_path).as_external()
              == "/working/root");
  BOOST_CHECK(system_path(bookkeeping_path("MT/foo/bar")).as_internal()
              == "/working/root/MT/foo/bar");
  BOOST_CHECK(system_path(bookkeeping_root).as_internal()
              == "/working/root/MT");
  initial_abs_path.unset();
  working_root.unset();
  initial_rel_path.unset();
}

static void test_access_tracker()
{
  access_tracker<int> a;
  BOOST_CHECK_THROW(a.get(), std::logic_error);
  a.set(1, false);
  BOOST_CHECK_THROW(a.set(2, false), std::logic_error);
  a.set(2, true);
  BOOST_CHECK_THROW(a.set(3, false), std::logic_error);
  BOOST_CHECK(a.get() == 2);
  BOOST_CHECK_THROW(a.set(3, true), std::logic_error);
}

void add_paths_tests(test_suite * suite)
{
  I(suite);
  suite->add(BOOST_TEST_CASE(&test_null_name));
  suite->add(BOOST_TEST_CASE(&test_file_path_internal));
  suite->add(BOOST_TEST_CASE(&test_file_path_external_no_prefix));
  suite->add(BOOST_TEST_CASE(&test_file_path_external_prefix_a_b));
  suite->add(BOOST_TEST_CASE(&test_split_join));
  suite->add(BOOST_TEST_CASE(&test_bookkeeping_path));
  suite->add(BOOST_TEST_CASE(&test_system_path));
  suite->add(BOOST_TEST_CASE(&test_access_tracker));
}

#endif // BUILD_UNIT_TESTS
How do you write an equation of a line going through (0,7), (3,5)?

1 Answer
Apr 14, 2017

#2x+3y=21#

Explanation:

A line going through the points #(0,7)# and #(3,5)# has a slope of
#m=(Deltay)/(Deltax)=(7-5)/(0-3)=-2/3#

The general point-slope form for a linear equation is
#y-y_1=m(x-x_1)#
for a line through the point #(x_1,y_1)# and with a slope of #m#.

We have already determined #m=-2/3#, and we can arbitrarily select either of the given points for #(x_1,y_1)#. For demonstration purposes, I will use #(x_1,y_1)=(0,7)#.

So our point-slope form becomes
#y-7=-2/3(x-0)#

While this is a perfectly valid answer, it is common to convert this into "standard form" #Ax+By=C#, with #A,B,C in ZZ, A>=0#:
#y-7=-2/3(x-0)#
#rarr 3y-21=-2x#
#rarr 2x+3y=21#
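As a quick sanity check (my addition, not part of the original answer), both given points satisfy the final equation #2x+3y=21#:
for #(0,7)#: #2(0)+3(7)=0+21=21#
for #(3,5)#: #2(3)+3(5)=6+15=21#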
A1706/EMC 3071: released in November 2016, this 13" MacBook Pro has an OLED Touch Bar. It features a dual-core "Skylake" Intel Core i5 CPU and four Thunderbolt 3 ports.

Why is my MacBook Pro getting hot?

There's a metal component on the logic board that appears to be getting hot. I'm not sure if it's an issue with the heat sink. This started happening after I spilled liquid over the keyboard. I then opened the machine up and cleaned the logic board and all the parts. I am considering replacing the I/O ports/boards. Can anyone point out why this could be happening, or what that metal component is? I believe it's the metal covering the RAM.

Update (08/24/2018): Still trying to solve this one. The RAM appears to be getting very hot. How do I remove the metal protectors?

1 Answer

Hey chrono, if water damage has occurred then you may be in the process of shorting a component out. You should take the lid off the back of the MacBook and remove all the covers from all the parts. You will need to clean, and let dry, all the covers and components that could have been at risk. If overheating or burning is occurring, a part on the logic board has been damaged.

Comments:

It's been several months since the accident happened. Do you think the cleaning process can still be done? Do you have any guides/tutorials on taking the covers off the parts? Will I need a stronger cleaning process than isopropyl alcohol?

Not necessarily. If the accident was minor, isopropyl alcohol will work just fine; you may need to take every piece off to clean. They should be screwed into the logic board itself with small Torx screws.

Thanks for your response! Do you think it's fair to say that the heat sink is working fine? Not 100% sure how to diagnose the heat sink.

Yes it is; if the heat sink wasn't working right, the MacBook wouldn't run, since the sensors would shut the Mac down.
Computer Science

The 1932 appointment of mathematician Warren Weaver as director for the Natural Sciences marked the expansion of Rockefeller Foundation (RF) endeavors to the newly emerging field of computer science. The RF had long invested in physics and chemistry through the work of the International Education Board (IEB) in the 1920s and its own support for National Research Council fellowships. In the 1930s, as Weaver developed targeted programs of research, advances in mechanical calculating devices showed significant promise for machine solutions to complex problems, which Weaver and his staff quickly recognized.

"A Roomful of Brains"[1]

In the 1930s, the Foundation underwrote Vannevar Bush's massive analog computer at the Massachusetts Institute of Technology (MIT), which was finally dedicated in 1942 as the Rockefeller Differential Analyzer. One of the largest and last of the mechanical analog computers, it weighed 100 tons and used 2,000 vacuum tubes and 150 motors. Dubbed "analog" because it carried out an analogy of a real physical process, the machine was touted as revolutionary in mechanized calculus. It operated in real time and, unlike later digital computers, did not require a program to translate high-level language to binary machine language.

[Image: A typical record as produced by the output recorder of the M.I.T. Differential Analyzer]

During World War II, the differential analyzer was used for computations of missile trajectories. But the war also accelerated the obsolescence of the differential analyzer. War work--in which Bush, Weaver, and scores of researchers trained on the differential analyzer participated at various levels--pushed digital technologies forward quickly. The RF responded to these developments in two ways. It continued to seek uses for the analog computer, the most ambitious of which was a project that computed market variables in the economies of developing countries. It also funded an experimental mathematics group at MIT to develop digital computing. The group was soon underwritten by the deeper pockets of the United States Navy's Whirlwind Program.

Merging Biology and Mathematics

In the 1940s, RF funding helped mathematician Norbert Wiener develop the new field of cybernetics. Wiener published his landmark book, Cybernetics, in 1948, after the Foundation's grant to his joint project with cardiologist Arturo Rosenblueth to study feedback loops in the human body. His work with Rosenblueth led Wiener to broader speculations about similarities between the communications and control systems in machines and those in natural and biological systems.

Natural Sciences director Weaver was also involved with a breakthrough book of the same era. In 1949, he co-authored the landmark text entitled The Mathematical Theory of Communication with Claude Shannon. The book remains essential to the understanding of information theory. Weaver's introductory chapters are widely credited with making Shannon's higher mathematical theories accessible to a more general audience.

Machine Learning

In the summer of 1956, the RF funded a milestone: the Dartmouth Conference on Artificial Intelligence.
The five-week gathering engaged a group of leading researchers who were a veritable "who's who" of computing innovators, including: John McCarthy, founder of the Stanford Artificial Intelligence Lab; Marvin Minsky, who developed the idea of neuron nets and founded the MIT Computer Science and Artificial Intelligence Lab; Claude E. Shannon, pioneer of information theory; and Nathan Rochester, developer of the IBM 701, the first general-purpose, mass-produced computer.

[Image: MIT computation center, 1957]

The field was so new that mathematician John McCarthy had to invent a new term to help explain the concept of machine learning. In fact, the first known use of the term "artificial intelligence" was in the proposal that McCarthy (with co-authors Shannon, Minsky, and Rochester) submitted to the RF. At the time, the implications of the field were understood only by a handful of researchers. Even Weaver, an insider to applied mathematics, tempered his support for the proposal with restraint, although he authorized RF Director of Biological and Medical Research Robert S. Morison to act as he saw fit. McCarthy requested $13,500 for a two-month conference, but Morison offered $7,500 for five weeks. As Morison explained,

I hope you won't feel we are being overcautious but the general feeling here is that this new field of mathematical models for thought, though very challenging for the long run, is still difficult to grasp very clearly. This suggests a modest gamble for exploring a new approach, but there is a great deal of hesitancy about risking any very substantial amount at this stage.[2]

The significance of this new field, while murky to outsiders, was immediately evident to practitioners. IBM, Bell Laboratories, and RAND all lent support for their key researchers to attend. Today, the conference remains widely acknowledged as the founding moment of artificial intelligence and has influenced research and development in mathematics, engineering, linguistics, computer science, and psychology ever since. Its wide range of topics included complexity theory, language simulation, neuron nets, abstraction of content from sensory inputs, and the relationship of randomness to creative thinking.

Beyond subject matter, the Dartmouth conference signaled a major paradigm shift in how mathematical work would happen in the computer age. Essential to the conference was bringing researchers with disparate specialties together, freeing them from the strictures of teaching and publishing within their own subfields, and offering time and opportunity for collaboration and exchange of ideas and information.

The Dartmouth conference proved to be the high-water mark of RF funding in the field of computer science. With military and corporate research increasingly driving the computer industry in the coming years, RF funding became less and less necessary. But the seeds the Foundation planted proved to be crucial to many later advances in the field.

[1] Gray, George W., "A Roomful of Brains," December 17, 1937, Rockefeller Archive Center (RAC), RG 1.1, Series 224.D, Box 2, Folder 23.

[2] Robert S. Morison to John McCarthy, November 30, 1955, RAC, RG 1.2, Series 200.D, Box 26, Folder 219.
/* -*- mode: Java; c-basic-offset: 4; indent-tabs-mode: nil; -*- //------100-columns-wide------>|*/
/*
 * Copyright (c) 2002 Extreme! Lab, Indiana University. All rights reserved.
 *
 * This software is open source. See the bottom of this file for the licence.
 *
 * $Id: XppCountMidlet.java,v 1.3 2003/04/06 00:04:01 aslom Exp $
 */

package midlet;

import java.io.*;
import javax.microedition.lcdui.*;
import javax.microedition.midlet.*;

import org.gjt.xpp.*;

import shared.count.ProgressObserver;
import shared.count.ReaderFactory;
import shared.count.XppCount;

/**
 * Simple Midlet to show XppCount working.
 */
public class XppCountMidlet extends MIDlet
    implements CommandListener, ProgressObserver, ReaderFactory, Runnable
{
    private final static boolean DEBUG = false;

    Command exitCommand = new Command("Exit", Command.EXIT, 2);
    Command parseCommand = new Command("Parse", Command.SCREEN, 1);
    Command speedCommand = new Command("Test", Command.SCREEN, 1);
    Command okCommand = new Command("OK", Command.OK, 1);
    Display display;
    Form welcomeForm;
    TextField inputXml;
    TextBox parsingOutput;
    //Form outputForm;
    //TextField outputParsing;
    boolean stopped = true;
    boolean finishRun;
    char[] xmlContent;
    XppCount driver;
    Thread runner;

    public XppCountMidlet() {}

    private void debug(String msg) {
        // notice that J2ME output is at least erratic....
        // CAN NOT be depended on for real debugging...
        if(DEBUG) System.err.println(msg);
    }

    public void startApp() throws MIDletStateChangeException {
        welcomeForm = new Form("XPP2 DEMO");
        inputXml = new TextField(
            "Enter XML", "World! 10", 1000, TextField.ANY);
        welcomeForm.append(inputXml);
        welcomeForm.addCommand(parseCommand);
        welcomeForm.addCommand(exitCommand);
        welcomeForm.setCommandListener(this);

        // outputForm = new Form("Output");
        // outputParsing = new TextField(
        //     "", "", 1000, TextField.ANY);
        // outputForm.append(outputParsing);
        // outputForm.addCommand(okCommand);
        // outputForm.setCommandListener(this);

        parsingOutput = new TextBox(
            "Output", "", 1000, TextField.ANY);
        parsingOutput.setCommandListener(this);
        parsingOutput.addCommand(okCommand);
        parsingOutput.setCommandListener(this);

        display = Display.getDisplay(this);
        display.setCurrent(welcomeForm);

        driver = new XppCount();

        runner = new Thread(this);
        finishRun = false;
        runner.start();
    }

    public void pauseApp() {
        stopped = true;
        //display = null;
    }

    public void destroyApp(boolean unconditional)
        throws MIDletStateChangeException
    {
        finishRun = stopped = true;
        if (unconditional) {
            notifyDestroyed();
        } else {
        }
    }

    public void commandAction(Command c, Displayable s) {
        try {
            debug("command="+c);
            if (c == exitCommand) {
                debug("EXIT pressed");
                destroyApp(false);
                notifyDestroyed();
            } else if (c == parseCommand) {
                debug("PARSE pressed");
                parsingOutput.setString("Parsing...\n");
                //outputParsing.setString("Parsing...\n");
                //print("Now parsing started...");
                display.setCurrent(parsingOutput);
                //display.setCurrent(outputForm);

                // transfer user input so it can be accessed by newReader()
                xmlContent = new char[inputXml.size()];
                inputXml.getChars(xmlContent);

                if(DEBUG) println("parsing requested");
                debug("parsing requested");

                //XmlPullParserFactory factory = XmlPullParserFactory.newInstance();
                //XmlPullParser pp = factory.newPullParser();
                //pp.setInput(xmlContent);
                // while(!stopped) {
                //     byte event = pp.next();
                //     int last = parsingOutput.size();
                //     parsingOutput.insert("X", last);
                // }

                stopped = false; // runner Thread will pick it up
                //runner.interrupt(); // not supported in J2ME
                Thread.currentThread().yield();
            } else if (c == okCommand) {
debug("OK pressed"); debug("parsing stopped"); stopped = true; display.setCurrent(welcomeForm); //runner.interrupt(); // not supported in J2ME Thread.currentThread().yield(); } else { debug("Unknown command "+c); } } catch (Exception ex) { debug(ex.toString()); ex.printStackTrace(); } } public void print(String msg) { int last = parsingOutput.size(); parsingOutput.insert(msg, last); // int last = outputParsing.size(); // outputParsing.insert(msg, last); } public void println(String msg) { print(msg+"\n"); } public void trace(String msg) { } public void verbose(String msg) { //println(msg); } public boolean stopRequested() { return stopped; } public Reader newReader() throws IOException { Reader r = new MyCharArrayReader(xmlContent); return r; } public void runTest() { try{ if(stopped == false) { debug("paring started..."); if(DEBUG) println("paring started..."); System.err.println("runtTest paring start"); driver.runTest("INTERNAL", this, this); stopped = true; debug("parsing finished"); if(DEBUG) println("paring finished."); } } catch(XmlPullParserException ex) { stopped = true; println("\nParsing error: "+ex.getMessage()); ex.printStackTrace(); } catch(java.io.IOException ex) { stopped = true; println("\nIO error: "+ex.getMessage()); ex.printStackTrace(); } catch(java.lang.Exception ex) { stopped = true; println("\nGeneric error: "+ex.getMessage()); ex.printStackTrace(); } } public void run() { // try { // XmlPullParserFactory factory = XmlPullParserFactory.newInstance(); // XmlPullParser pp = factory.newPullParser(); // char[] data = new char[inputXml.size()]; // inputXml.getChars(data); // pp.setInput(data); // while(!stopped) { // byte event = pp.next(); // int last = parsingOutput.size(); // parsingOutput.insert("X", last); // } int count = 0; while(finishRun == false) { try { Thread.sleep(200); } catch(InterruptedException ex) { } //debug("RUN "+(count++)+" stopped="+stopped); //if(DEBUG) println("RUN "+(count++)+" "+stopped); runTest(); } } public class MyCharArrayReader extends Reader { char[] buf; int pos; public MyCharArrayReader(char[] buf) { this.buf = buf; pos = 0; } public int read(char[] cbuf, int off, int len) throws IOException { debug( "buf="+buf+" cbuf="+cbuf+" off="+off+" len="+len); int toRead = len; if(pos + toRead > buf.length) { toRead = buf.length - pos; } if(toRead == 0) return -1; System.arraycopy(buf, pos, cbuf, off, toRead); pos += toRead; return toRead; } public void close() throws IOException { if(pos >= buf.length) { throw new IOException( "attempt to close already closed reader"); } pos = buf.length; } } } /* * Indiana University Extreme! Lab Software License, Version 1.2 * * Copyright (C) 2002 The Trustees of Indiana University. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1) All redistributions of source code must retain the above * copyright notice, the list of authors in the original source * code, this list of conditions and the disclaimer listed in this * license; * * 2) All redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the disclaimer * listed in this license in the documentation and/or other * materials provided with the distribution; * * 3) Any documentation included with all redistributions must include * the following acknowledgement: * * "This product includes software developed by the Indiana * University Extreme! Lab. 
 *      For further information please visit
 *      http://www.extreme.indiana.edu/"
 *
 *    Alternatively, this acknowledgment may appear in the software
 *    itself, and wherever such third-party acknowledgments normally
 *    appear.
 *
 * 4) The name "Indiana University" or "Indiana University
 *    Extreme! Lab" shall not be used to endorse or promote
 *    products derived from this software without prior written
 *    permission from Indiana University.  For written permission,
 *    please contact http://www.extreme.indiana.edu/.
 *
 * 5) Products derived from this software may not use "Indiana
 *    University" name nor may "Indiana University" appear in their name,
 *    without prior written permission of the Indiana University.
 *
 * Indiana University provides no reassurances that the source code
 * provided does not infringe the patent or any other intellectual
 * property rights of any other entity.  Indiana University disclaims any
 * liability to any recipient for claims brought by any other entity
 * based on infringement of intellectual property rights or otherwise.
 *
 * LICENSEE UNDERSTANDS THAT SOFTWARE IS PROVIDED "AS IS" FOR WHICH
 * NO WARRANTIES AS TO CAPABILITIES OR ACCURACY ARE MADE. INDIANA
 * UNIVERSITY GIVES NO WARRANTIES AND MAKES NO REPRESENTATION THAT
 * SOFTWARE IS FREE OF INFRINGEMENT OF THIRD PARTY PATENT, COPYRIGHT, OR
 * OTHER PROPRIETARY RIGHTS. INDIANA UNIVERSITY MAKES NO WARRANTIES THAT
 * SOFTWARE IS FREE FROM "BUGS", "VIRUSES", "TROJAN HORSES", "TRAP
 * DOORS", "WORMS", OR OTHER HARMFUL CODE. LICENSEE ASSUMES THE ENTIRE
 * RISK AS TO THE PERFORMANCE OF SOFTWARE AND/OR ASSOCIATED MATERIALS,
 * AND TO THE PERFORMANCE AND VALIDITY OF INFORMATION GENERATED USING
 * SOFTWARE.
 */
User profile (Stack Exchange security): 311 reputation; Rome, Italy; member for 3 years, 7 months; last seen Oct 17 at 11:47. Bio: "Against stupidity, the Gods themselves fight in vain."

Selected activity:

May 1 - comment on "Why do malware creators use such clever technologies for such silly purposes?": Well, of course a total system wipe and reinstall followed soon after. Which, again, it wouldn't have... if the infection was hidden instead of blatantly obvious.

Feb 14 - comment on "Why are hash functions one way? If I know the algorithm, why can't I calculate the input from it?": Agreed. But the question is asking "why can't it just reverse the process to calculate the password from the hash", not "how can I match the hash even if I don't know the password".

Feb 14 - answered "Why are hash functions one way? If I know the algorithm, why can't I calculate the input from it?"

Jul 28 - answered "Do we really need an antivirus? If yes, what for? And how do they activate and enter a PC?"

Jul 13 - accepted an answer on "Why do malware creators use such clever technologies for such silly purposes?"

Jul 13 - comment on "Why do malware creators use such clever technologies for such silly purposes?": Sure. But mine was just an example; there are many more useful things a malware could do than displaying ads... which is also the worst thing it can do as far as "going unnoticed" is concerned.

Jul 13 - comment on the same question: I think you misunderstood my question. It was more along the lines of "why do they make the presence of malware obvious by showing ads when they could just have it sit there and steal your credit card number unnoticed?".

Jul 13 - comment on "Unknown malware, how to report it and whom to report it to?": @Rory: this is making me wonder enough to open a different question; see here: security.stackexchange.com/questions/5244/….

Jul 13 - accepted an answer on "Unknown malware, how to report it and whom to report it to?"

Jul 13 - asked "Why do malware creators use such clever technologies for such silly purposes?"

Jul 13 - comment on "Unknown malware, how to report it and whom to report it to?": They are becoming increasingly clever, aren't they? :-/ I wonder why they keep using these things only for adware... such a rootkit would have gone undetected forever if it wasn't for those search hijackings and iexplore.exe background processes.

Jul 13 - comment on the same question: Agreed that I should have saved evidence for analysis; I was a bit concerned about removing the infection, though... and, anyway, besides the boot sector, I still can't find (manually or through scanners) any offending file.
Thursday, September 14, 2017

POWER OF QUANTUM COMPUTERS

It is clear that when it comes to solving numerical search problems like Integer Factorization, quantum computers would allow us to find the solution(s) instantly. We just set up the problem (multiply two unknown integers to get an unknown integer result, then set the unknown result to the result we want), and instantly the input integers become known. So quantum computers would be vastly more powerful than regular computers for solving numerical search problems.

But we also use regular computers for symbolic calculation (CAS (Computer Algebra System) software like Mathematica, Maple, etc.). What more could quantum computers provide when it comes to symbolic calculation?

I think they could provide the same benefit as for numerical calculation, meaning instantly solving symbolic search problems. Imagine if we could just set up an equation expression string as input, and the quantum computer instantly sets the output string (with unknown value) to a general solution expression (known value), if such a solution really exists/is possible.

For example:

1) Input string: "a*x^0+b*x^1=0"
   String value search problem: "x=?"
   Output string: "-a/b"

2) Input string: "a*x^0+b*x^1+c*x^2=0"
   String value search problem: "x=?"
   Output string: "(-b+(b^2-4*a*c)^(1/2))/(2*a)"

I think using quantum computers for symbolic calculation should allow us to solve many important problems of this kind which we cannot solve with regular computers in practical time. I am guessing those would even include some Millennium Prize Problems, like finding (all) general solution expressions for the Navier-Stokes equations (and proving the Riemann Hypothesis?).

I think, assuming we will have a suitable general-purpose quantum computer someday, the only issue is figuring out exactly how to express and solve symbolic calculation problems like the two examples above.

Let's try to solve the first problem using a quantum computer. Assuming the quantum computer symbolically calculated the solution (expression string E), how could we test whether it is correct or not? How about creating an equation that would be true only if E is a valid solution, which is the input equation itself; then:

"a*E^0+b*E^1=0" or "a+b*E=0"

Then I think the solution algorithm for the quantum computer would be:
Start with unknown values E, a, b.
Calculate a+b*E (not a numerical calculation but a symbolic expression calculation, using an expression tree).
Set the unknown calculation result to 0.
The unknown string E collapses to the answer: "-a/b"

And if we consider how we could do the symbolic calculation step above using a regular computer, which requires manipulating an expression tree using stack(s), then we need to figure out how to create a quantum stack using a quantum computer. (Imagine a stack that can do any number of push/pop operations instantly, to collapse into its final known state instantly.) (If we could do quantum stacks, then we could also do quantum queues.) (And then quantum versions of other standard programming data structures would also be possible.)
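For concreteness, here is a minimal classical sketch (my addition, not from the original post) of the stack-based step described above: evaluating the expression a + b*E in postfix form with an explicit value stack. All names in it (Token, evalPostfix) are illustrative, and it is purely classical; a "quantum stack" would have to carry out this same push/pop sequence over unknown/superposed values.

#include <cstdio>
#include <stack>
#include <vector>

// One token of the postfix form of "a + b*E": either a number or an operator.
struct Token {
    char op;     // '+' or '*' for operators, 0 for a plain value
    double val;  // used when op == 0
};

// Classic stack evaluation: push values, pop two on each operator.
double evalPostfix(const std::vector<Token>& code) {
    std::stack<double> st;
    for (const Token& t : code) {
        if (t.op == 0) {
            st.push(t.val);
        } else {
            double rhs = st.top(); st.pop();
            double lhs = st.top(); st.pop();
            st.push(t.op == '+' ? lhs + rhs : lhs * rhs);
        }
    }
    return st.top();
}

int main() {
    // a + b*E in postfix: "a b E * +", with a=1, b=2 and the candidate E=-a/b=-0.5.
    std::vector<Token> code = {
        {0, 1.0}, {0, 2.0}, {0, -0.5}, {'*', 0}, {'+', 0}
    };
    // Prints 0: E=-a/b indeed satisfies a + b*E = 0.
    std::printf("a + b*E = %g\n", evalPostfix(code));
    return 0;
}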
What could be the most practical way to build a large-scale quantum computer? I think currently building a quantum computer is really hard because our physical world is highly noisy at the quantum scale. Imagine using single atoms/molecules as qubits. Imagine cooling them close to absolute zero in a vacuum environment that needs to be perfectly maintained. Could there be a better way? What if we create a quantum computer in a different level of reality, one which does not have noise? Think about our regular digital computers. Could we think of the bit values in the memory of a working regular computer as a different level of reality of quasiparticles, one which does not have noise? Can we create an extrinsic-semiconductor-based quantum computer chip that creates and processes qubits as quasiparticles? (With the quantum computer designed and operated like a cellular automaton, similar to Wireworld?)

https://en.wikipedia.org/wiki/Quasiparticle
https://en.wikipedia.org/wiki/Electron_hole
https://en.wikipedia.org/wiki/Extrinsic_semiconductor
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Wireworld

Continuum Hypothesis is False

The continuum hypothesis states "There is no set whose cardinality is strictly between that of the integers and the real numbers."

Resolution: Express each set in question as a set of points in (ND) Euclidean space, and calculate their fractal dimensions to compare their cardinality =>

Set of all integers => fractal dimension = 0
Set of all real numbers => fractal dimension = 1
Set of all complex numbers => fractal dimension = 2
Set of all quaternion numbers => fractal dimension = 4
Set of all octonion numbers => fractal dimension = 8
Set of all sedenion numbers => fractal dimension = 16

Set of all points of a certain fractal => fractal dimension:
Cantor set: 0.6309
Koch curve: 1.2619
Sierpinski triangle: 1.5849
Sierpinski carpet: 1.8928
Pentaflake: 1.8617
Hexaflake: 1.7712
Hilbert curve: 2

(These are the usual box-counting/Hausdorff dimensions; for example, the Sierpinski triangle's is log 3 / log 2 ≈ 1.5849.)

Tuesday, September 5, 2017

EXPLAINING DARK ENERGY AND DARK MATTER

If the Universe/Reality (U/R) is a Cellular Automaton (CA) (Quantum Computer (QC)) operating at the Planck Scale (PS), then how could it explain Dark Energy (DE) and Dark Matter (DM)?

Assume Quantum Physics (QP) is its first Macro Scale (MS) Emergent Property (EP); assume Relativity Physics (RP) is its second MS EP; then Dark (Energy & Matter) Physics (DP) could be its third MS EP! (Just as, for example, Newton (Navier-Stokes) Physics (NP) is the first macro-scale emergent property of some CA, like FHP and LBM.)

Is the ratio of DM to Matter (DM/M) always (everywhere and everywhen) constant in the Universe? Is the ratio of DE to Vacuum Energy (DE/VE) always (everywhere and everywhen) constant in the Universe? (If so, could they be a consequence of DP being what is said above?)

Does every EP have a finite scale range? (Do fluid-simulation CA (like FHP/LBM) have a second layer of EP at super-macro scale (where NP no longer applies)?)

Wednesday, August 16, 2017

NATURE OF TIME

The concept of "now" being relative implies an unchanging 4D "Block Universe" (so the future is predictable), and it comes from Relativity. But QM says the opposite (the future is unpredictable; there is only a certain probability for any future event).

As we look at the Universe/reality starting at the microscale (particle size) and go to the macroscale, future events become more and more certain. For example, think of how certain the things you plan to do tomorrow are: can't we say they are not perfectly certain, but close? But also think of how certain the motion of Earth in its orbit tomorrow is. Isn't it much more certain (but still not perfectly certain)?

The future being unpredictable at microscale and then becoming more and more predictable at higher and higher scales also happens in Cellular Automata (which are used for fluid simulation). I think one clear implication of the future becoming more and more predictable at higher and higher scales is that time must be an emergent property. Which in turn implies spacetime must be an emergent property.
Which in turn implies Relativity must be an emergent property. I think I read somewhere that the equations of GR are similar to the equations of some kind of (non-viscous?) fluid. If so, that would make sense, considering that Cellular Automata used for fluid simulation show behavior similar to GR.

I just came across a part of an article from Scientific American, September 2015, that says something very similar to what I had said about the nature of time:

"Whenever people talk about a dichotomy, though, they usually aim to expose it as false. Indeed, many philosophers think it is meaningless to say whether the universe is deterministic or indeterministic. It can be either, depending on how big or complex your object of study is: particles, atoms, molecules, cells, organisms, minds, communities. "The distinction between determinism and indeterminism is a level-specific distinction," says Christian List, a philosopher at the London School of Economics and Political Science. "If you have determinism at one particular level, it is fully compatible with indeterminism, both at higher levels and at lower levels." The atoms in our brain can behave in a completely deterministic way while still giving us freedom of action because atoms and agency operate on different levels. Likewise, Einstein sought a deterministic subquantum level without denying that the quantum level was probabilistic."

(All my comments above were also published here: http://scienceblogs.com/startswithabang/2017/08/13/comments-of-the-week-172-from-sodium-and-water-to-the-most-dangerous-comet-of-all/)

If the future (time) becomes more and more certain as we go from microscale to macroscale, here is a thought experiment for determining how exactly that happens:

Imagine in a vacuum chamber we dropped a single neutral carbon atom from a certain height many times, and measured/determined how close it hits the center of the circular target area, and with what probability. Later we repeated the experiment with C60 molecules. Later we repeated the experiment with solid balls of 60 C60 molecules. Later we repeated the experiment with solid balls of 3600 C60 molecules. ...

I think what would happen is that bigger and bigger solid balls would hit closer and closer to the center with higher and higher probabilities, and the general graph (an exponential curve?) of the results would tell us how exactly the future (time) becomes more and more certain.

A more advanced version of the thought experiment could be this: imagine we started the experiment with micro balls and a very small drop height, and as the radius of the solid balls got bigger and bigger, we increased the drop distance by the same ratio as the radius.

Monday, August 7, 2017

FUTURE OF PHYSICS

If we look at the history of physics, is there a clear trend that allows us to guess its future? What are the major milestones in physics history? I think it could be said:

1) Ancient Greece (level) Physics
2) Galileo (level) Physics
3) Newton (level) Physics
4) Einstein (level) Physics
5) TOE (level) Physics(?)

I think there is indeed a clear trend if you think about it. Each new revolution in physics brings something like an order-of-magnitude increase in the complexity of the math (calculations), not just a new theory. So I would guess that doing calculations to solve physics problems using a TOE will be practically impossible using pen and paper only. I think it will require a (quantum) computer.
(Realize that all physics problems (where an answer is possible) can be solved today using non-quantum (super)computers/calculators/pen & paper.)

I think if the Universe (or Reality) turns out to be a Cellular Automaton design running on an ND matrix qubit (register) quantum computer (with Planck-scale cells), then it would fit the above guess about the future of physics (TOE) perfectly.

Monday, July 31, 2017

Physics Of Star Trek

I have seen maybe all Star Trek TV show episodes and movies. Below I will try to provide more plausible ways of realizing similar technologies according to the known laws of physics of our Universe. I do not know if similar explanations have been provided by anyone before.

Super Energy Sources: They could be portable fusion reactors which are almost perfectly efficient. They could provide continuous power (similar to DC) or repeating pulses (similar to AC). There may be super batteries that store a dense cloud of electron gas in vacuum (or as a BEC?).

Stun guns: Imagine a super powerful gun that momentarily creates conductive paths in air using UV pulse/continuous lasers. It sends a powerful electroshock to the target through those conductive paths. (I think this tech is already developing currently.)

Teleportation: Imagine two teleportation machines (chambers). The sender machine creates some kind of quantum shock wave that instantly destroys the target object into gamma photons that carry the same quantum information. That information is sent to the receiver machine, which has a giant BEC (made of the same kinds of atoms/molecules, in the same proportions, as the target object?). When the information is applied to the BEC (instantly, like a quantum shock wave), it somehow instantly quantum mechanically collapses into an exact copy of the object.

Phasers: Instantly destroy the target object using a quantum shock wave similar to that used in teleportation. (The target object instantly gets destroyed as in teleportation, but there is no receiver for its quantum information.)

Artificial Gravity: Imagine if we had small coils that could create high positive/negative spacetime curvatures around them (spherical/cylindrical). We could place a grid of those coils under floors etc. to create artificial gravity.

Force Fields: Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils, and also a dense grid of (superconductor) coils that can create (+/-) electric/magnetic fields. Would it not be possible to use them to create "force fields" all around the spaceships to deflect any kind of attack (atom/particle/photon)?

Cloaking Fields: Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils. Would it not be possible to use them to create a photon-deflection field all around the spaceships?

Warp Speed: Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils. Would it not be possible to use them to create a warp bubble all around the spaceships, to act like an Alcubierre drive?

Sub-space Communication: (Since we assume we have the ability to manipulate the curvature of spacetime:) Imagine we have tech to create micro wormholes as twins and are able to trap them indefinitely. A communication signal enters either one and instantly comes out of the other. Each time we create a new set of twin micro wormholes, we keep one in a central hub on Earth, and the other is carried by a spaceship or placed on a different planet/moon/space station.
(The same tech could also be useful for creating and trapping micro black holes, which might serve as compact batteries.)

Electronic dampening field: Imagine an EMP created like a standing wave, using a grid of phased-array EMP generators.

Hulls that can withstand almost any kind of attack, at least for a while when necessary: How about metallic hydrogen, or another solid material created using ultrapressure (and temperature)?

I think it is also clear that Star Trek physics requires devices that can create strong positive and negative spacetime curvatures. How could that work according to the laws and limitations of known physics, assuming they must always be obeyed? According to General Relativity, spacetime bends in the presence of positive or negative mass/energy (/pressure/acceleration).

What if we destroyed a small amount of matter/antimatter in a spot (as pulses)? (Could there be an economical way to create as much antimatter as we need? Think about how we can easily induce a permanent magnet to permanently switch its N and S sides by momentarily creating a strong enough reverse magnetic field with an electromagnet. Could there be any way to create a special quantum field/shockwave (using an electric and/or magnetic field generator, or a laser?) that, when it passes through a sample of matter (trapped in mid-vacuum), induces that matter to instantly switch to antimatter — all electrons to positrons, all protons to antiprotons, all neutrons to antineutrons?)

What if we created an arbitrarily strong volume/spot of magnetic and/or electric field(s)? What if we created a spot of ultrapressure using tech far beyond any diamond anvil? What if we created a spot of negative ultrapressure (using a pulling force)? (Imagine we had or created a (solid?) material that is ultrastrong against pulling force, even for a moment.) What if we had an ultrastrong (solid?) disk/sphere/ring trapped in mid-vacuum, and created an ultrapowerful rotational force on it (even for a moment) using an ultrapowerful magnetic field, so that the object gained (even for a moment) ultrahigh speed and/or positive/negative acceleration?

Sunday, July 30, 2017

3D VOLUME SCANNER IDEA

I recently learned about an innovative method for getting 3D scans of objects. It overcomes the line-of-sight problem and captures the inner shape of the object as well. A robot arm dips the object into water in different orientations; each time, how the water level changes over time is measured, and from these measurements the 3D object shape is calculated like a CAT scan.

I think this method can be improved upon greatly, as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine. It could be a bendable but rigid steel wire ring, maybe a bicycle wheel wire ring, or even a suitable kind of plastic. The object could be in any orientation, held tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury (which, unlike water, would keep the object dry — and the tank walls too — so measurements would be more precise). (Mercury is also conductive, which would make measurements easier using electronic sensor(s).) (It could also be a cylindrical tank.)

Imagine that inside the tank we have a vertical bar that can move a horizontal bar up and down under electronic control. Imagine that horizontal bar, at its middle (underside), has a hook/lock for the wire ring (around the object).
That hook/lock has an electronically controlled motor that can rotate the wire ring (and thus the object) to any (vertical) angle. (To prevent the ring/object from swinging like a pendulum when it is dipped into the liquid (fast) each time, we could add a second horizontal bar with adjustable height that has a hook/lock for the wire ring at its middle (upper side), so the ring would be held in place at its top and bottom points by the two horizontal bars.)

Now imagine that, to take new measurements, each time we rotate the object by a small, equal angular amount (within 360 degrees). Then we dip the object fully into the liquid (at constant speed) and pull it fully back out (at constant speed). Every time we dip the object, we record the changes in the liquid level in the tank over time. (While the object is fully dipped, we could rotate it again and then record the liquid-level changes while we pull it fully back out, to get two sets of measurements per cycle instead of one.)

(One way to see the CAT-scan analogy, from volume conservation alone: if the tank's cross-sectional area is A, the measured liquid level is L, and the object's lowest point is at height z, then the submerged depth is s = L − z, and A·dL = a(s)·d(L − z), so each dip recovers the object's cross-sectional area a(s) = A·dL/(dL − dz) at every depth — one projection-like 1D profile per orientation, which is what the tomographic reconstruction needs.)

Of course, mercury is highly toxic and reacts with some metals, so it would be best to find a better liquid. The liquid would need to be non-stick, to keep the scanned objects and tank walls dry. Minimal viscosity and density, a maximal temperature range with linear volume change over temperature, and constant volume under common air pressures would be better. Being stable (not chemically active) and non-toxic is a must; electrical conductivity would be a plus.

References:
https://www.sciencedaily.com/releases/2017/07/170721131954.htm
http://www.fabbaloo.com/blog/2017/7/25/water-displacement-3d-scanning-will-this-work
https://3dprintingindustry.com/news/3d-scanning-objects-dipping-water-118886/
net_kernel.3erl - Man Page

Erlang networking kernel.

Description

The net kernel is a system process, registered as net_kernel, which must be operational for distributed Erlang to work. The purpose of this process is to implement parts of the BIFs spawn/4 and spawn_link/4, and to provide monitoring of the network.

An Erlang node is started using command-line flag -name or -sname:

    $ erl -sname foobar

It is also possible to call net_kernel:start([foobar]) directly from the normal Erlang shell prompt:

    1> net_kernel:start([foobar, shortnames]).
    {ok,<0.64.0>}
    (foobar@gringotts)2>

If the node is started with command-line flag -sname, the node name is foobar@Host, where Host is the short name of the host (not the fully qualified domain name). If started with flag -name, the node name is foobar@Host, where Host is the fully qualified domain name. For more information, see erl.

Normally, connections are established automatically when another node is referenced. This functionality can be disabled by setting the Kernel configuration parameter dist_auto_connect to never, see kernel(6). In this case, connections must be established explicitly by calling connect_node/1.

Which nodes are allowed to communicate with each other is handled by the magic cookie system; see section Distributed Erlang in the Erlang Reference Manual.

Warning: Starting a distributed node without also specifying -proto_dist inet_tls will expose the node to attacks that may give the attacker complete access to the node and, by extension, the cluster. When using insecure distributed nodes, make sure that the network is configured to keep potential attackers out. See the Using SSL for Erlang Distribution User's Guide for details on how to set up a secure distributed node.

Exports

allow(Nodes) -> ok | error

    Types:
        Nodes = [node()]

Permits access to the specified set of nodes. Before the first call to allow/1, any node with the correct cookie can be connected. When allow/1 is called, a list of allowed nodes is established. Any access attempts made from (or to) nodes not in that list will be rejected. Subsequent calls to allow/1 will add the specified nodes to the list of allowed nodes. It is not possible to remove nodes from the list. Returns error if any element in Nodes is not an atom.

connect_node(Node) -> boolean() | ignored

    Types:
        Node = node()

Establishes a connection to Node. Returns true if a connection was established or was already established, or if Node is the local node itself. Returns false if the connection attempt failed, and ignored if the local node is not alive.

get_net_ticktime() -> Res

    Types:
        Res = NetTicktime | {ongoing_change_to, NetTicktime} | ignored
        NetTicktime = integer() >= 1

Gets net_ticktime (see kernel(6)).

Defined return values (Res):

    NetTicktime: net_ticktime is NetTicktime seconds.
    {ongoing_change_to, NetTicktime}: net_kernel is currently changing net_ticktime to NetTicktime seconds.
    ignored: The local node is not alive.

getopts(Node, Options) -> {ok, OptionValues} | {error, Reason} | ignored

    Types:
        Node = node()
        Options = [inet:socket_getopt()]
        OptionValues = [inet:socket_setopt()]
        Reason = inet:posix() | noconnection

Get one or more options for the distribution socket connected to Node. If Node is a connected node, the return value is the same as from inet:getopts(Sock, Options), where Sock is the distribution socket for Node. Returns ignored if the local node is not alive, or {error, noconnection} if Node is not connected.
monitor_nodes(Flag) -> ok | Error
monitor_nodes(Flag, Options) -> ok | Error

    Types:
        Flag = boolean()
        Options = [Option]
        Option = {node_type, NodeType} | nodedown_reason
        NodeType = visible | hidden | all
        Error = error | {error, term()}

The calling process subscribes or unsubscribes to node status change messages. A nodeup message is delivered to all subscribing processes when a new node is connected, and a nodedown message is delivered when a node is disconnected.

If Flag is true, a new subscription is started. If Flag is false, all previous subscriptions started with the same Options are stopped. Two option lists are considered the same if they contain the same set of options.

As from Kernel version 2.11.4, and ERTS version 5.5.4, the following is guaranteed:

• nodeup messages are delivered before delivery of any message from the remote node passed through the newly established connection.
• nodedown messages are not delivered until all messages from the remote node that have been passed through the connection have been delivered.

Notice that this is not guaranteed for Kernel versions before 2.11.4.

As from Kernel version 2.11.4, subscriptions can also be made before the net_kernel server is started, that is, net_kernel:monitor_nodes/[1,2] does not return ignored.

As from Kernel version 2.13, and ERTS version 5.7, the following is guaranteed:

• nodeup messages are delivered after the corresponding node appears in results from erlang:nodes/X.
• nodedown messages are delivered after the corresponding node has disappeared in results from erlang:nodes/X.

Notice that this is not guaranteed for Kernel versions before 2.13.

The format of the node status change messages depends on Options. If Options is [], which is the default, the format is as follows:

    {nodeup, Node} | {nodedown, Node}
    Node = node()

If Options is not [], the format is as follows:

    {nodeup, Node, InfoList} | {nodedown, Node, InfoList}
    Node = node()
    InfoList = [{Tag, Val}]

InfoList is a list of tuples. Its contents depend on Options, see below.

Also, when OptionList == [], only visible nodes, that is, nodes that appear in the result of erlang:nodes/0, are monitored.

Option can be any of the following:

{node_type, NodeType}: Valid values for NodeType:

    visible: Subscribe to node status change messages for visible nodes only. The tuple {node_type, visible} is included in InfoList.
    hidden: Subscribe to node status change messages for hidden nodes only. The tuple {node_type, hidden} is included in InfoList.
    all: Subscribe to node status change messages for both visible and hidden nodes. The tuple {node_type, visible | hidden} is included in InfoList.

nodedown_reason: The tuple {nodedown_reason, Reason} is included in InfoList in nodedown messages. Reason can, depending on which distribution module or process is used, be any term, but for the standard TCP distribution module it is any of the following:

    connection_setup_failed: The connection setup failed (after nodeup messages were sent).
    no_network: No network is available.
    net_kernel_terminated: The net_kernel process terminated.
    shutdown: Unspecified connection shutdown.
    connection_closed: The connection was closed.
    disconnect: The connection was disconnected (forced from the current node).
    net_tick_timeout: Net tick time-out.
    send_net_tick_failed: Failed to send net tick over the connection.
    get_status_failed: Status information retrieval from the Port holding the connection failed.
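For illustration (a minimal sketch, not part of the manual), a process can subscribe with options and pattern match on the extended message format:

    %% Subscribe to status changes for all node types, including the
    %% disconnect reason, then log the first event that arrives.
    ok = net_kernel:monitor_nodes(true, [{node_type, all}, nodedown_reason]),
    receive
        {nodeup, Node, InfoList} ->
            io:format("nodeup: ~p ~p~n", [Node, InfoList]);
        {nodedown, Node, InfoList} ->
            io:format("nodedown: ~p ~p~n", [Node, InfoList])
    end.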
set_net_ticktime(NetTicktime) -> Res
set_net_ticktime(NetTicktime, TransitionPeriod) -> Res

    Types:
        NetTicktime = integer() >= 1
        TransitionPeriod = integer() >= 0
        Res = unchanged | change_initiated | {ongoing_change_to, NewNetTicktime}
        NewNetTicktime = integer() >= 1

Sets net_ticktime (see kernel(6)) to NetTicktime seconds. TransitionPeriod defaults to 60.

Some definitions:

    Minimum transition traffic interval (MTTI): minimum(NetTicktime, PreviousNetTicktime)*1000 div 4 milliseconds.
    Transition period: The time of the least number of consecutive MTTIs to cover TransitionPeriod seconds following the call to set_net_ticktime/2 (that is, ((TransitionPeriod*1000 - 1) div MTTI + 1)*MTTI milliseconds).

If NetTicktime < PreviousNetTicktime, the net_ticktime change is done at the end of the transition period; otherwise at the beginning. During the transition period, net_kernel ensures that there is outgoing traffic on all connections at least every MTTI millisecond.

Note: The net_ticktime changes must be initiated on all nodes in the network (with the same NetTicktime) before the end of any transition period on any node; otherwise connections can erroneously be disconnected.

Returns one of the following:

    unchanged: net_ticktime already has the value of NetTicktime and is left unchanged.
    change_initiated: net_kernel initiated the change of net_ticktime to NetTicktime seconds.
    {ongoing_change_to, NewNetTicktime}: The request is ignored because net_kernel is busy changing net_ticktime to NewNetTicktime seconds.

setopts(Node, Options) -> ok | {error, Reason} | ignored

    Types:
        Node = node() | new
        Options = [inet:socket_setopt()]
        Reason = inet:posix() | noconnection

Set one or more options for distribution sockets. Argument Node can be either one node name or the atom new to affect the distribution sockets of all future connected nodes.

The return value is the same as from inet:setopts/2, or {error, noconnection} if Node is not a connected node or new.

If Node is new, the Options will also be added to the Kernel configuration parameters inet_dist_listen_options and inet_dist_connect_options.

Returns ignored if the local node is not alive.

start([Name]) -> {ok, pid()} | {error, Reason}
start([Name, NameType]) -> {ok, pid()} | {error, Reason}
start([Name, NameType, Ticktime]) -> {ok, pid()} | {error, Reason}

    Types:
        Name = atom()
        NameType = shortnames | longnames
        Reason = {already_started, pid()} | term()

Turns a non-distributed node into a distributed node by starting net_kernel and other necessary processes.

Notice that the argument is a list with exactly one, two, or three elements. NameType defaults to longnames and Ticktime to 15000.

stop() -> ok | {error, Reason}

    Types:
        Reason = not_allowed | not_found

Turns a distributed node into a non-distributed node. For other nodes in the network, this is the same as the node going down. Only possible when the net kernel was started using start/1, otherwise {error, not_allowed} is returned. Returns {error, not_found} if the local node is not alive.

Referenced By

erl(1), global.3erl(3), kernel(6).

kernel 8.0.2 Ericsson AB Erlang Module Definition
What is an Apex trigger?

Updated: Aug 25, 2023

An Apex trigger is a piece of code that executes automatically in response to certain events in Salesforce, such as the insertion, update, or deletion of a record. Triggers are written in Apex, a programming language specifically designed for the Salesforce platform.

Triggers can be used to perform a variety of tasks, such as:

• Validating data before it is saved in the database
• Updating related records when a record is changed
• Sending notifications or emails when certain events occur
• Performing calculations or other complex processes

Using Apex triggers, you can customize the behavior of your Salesforce org and automate processes that would otherwise require manual intervention.

How do I create an Apex trigger?

To create an Apex trigger, you will need a developer account and some familiarity with Apex. Here are the basic steps:

1. Go to the Apex Triggers page in the Setup menu.
2. Click the "New" button to create a new trigger.
3. Select the object that the trigger should be associated with (e.g. Account, Contact, Opportunity, etc.).
4. Select the event that will trigger the code execution (e.g. before insert, after update, etc.).
5. Write the Apex code for the trigger in the provided editor.
6. Save the trigger and activate it.

It's a good idea to test your trigger thoroughly before deploying it to your production environment. You can use the Apex Developer Console or the Execute Anonymous window to test your trigger and debug any issues. (A minimal example trigger appears at the end of this post.)

Tips for writing effective Apex triggers

• Keep your trigger code as simple and efficient as possible. Triggers can be resource-intensive, and performance can suffer if they are not optimized.
• Use helper classes or methods to encapsulate complex logic and make your trigger code more maintainable.
• Avoid using SOQL or DML statements inside loops, as this can cause performance issues.
• Use Apex best practices, such as proper exception handling and bulkification, to avoid common pitfalls.
• Follow the Governor Limits to ensure that your trigger doesn't exceed the maximum limits for CPU time, database operations, and other resources.

Conclusion

Apex triggers are a powerful tool for automating and customizing your Salesforce org. With some planning and careful coding, you can use triggers to streamline your processes and improve efficiency.
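To make the steps above concrete, here is a minimal sketch (the choice of Opportunity, the negative-Amount rule, and the Account update are invented for illustration, not prescribed by the post or by Salesforce):

    // Validates records in a before context and updates related records in an
    // after context, in a single bulk-safe pass (no SOQL/DML inside loops).
    trigger OpportunityTrigger on Opportunity (before insert, before update, after insert) {
        if (Trigger.isBefore) {
            // Validate data before it is saved; addError blocks the save.
            for (Opportunity opp : Trigger.new) {
                if (opp.Amount != null && opp.Amount < 0) {
                    opp.addError('Amount cannot be negative.');
                }
            }
        } else if (Trigger.isAfter && Trigger.isInsert) {
            // Collect parent Account ids first, then query and update once.
            Set<Id> accountIds = new Set<Id>();
            for (Opportunity opp : Trigger.new) {
                if (opp.AccountId != null) {
                    accountIds.add(opp.AccountId);
                }
            }
            List<Account> accounts = [SELECT Id, Description FROM Account WHERE Id IN :accountIds];
            for (Account acc : accounts) {
                acc.Description = 'Has open opportunities';
            }
            update accounts;
        }
    }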
Download TeamSpeak Server 3.0.0 (32-bit)

TeamSpeak Server 32-bit 3.0.0
By TeamSpeak Systems GmbH (Non-Commercial Freeware)

* disable_db_logging set to true by default
* serversnapshotdeploy can deploy into a new virtualserver now; to achieve this, do a "use 0" just before the command
* clients table got a new field "client_lastip" where we store the last IP used by the client
* clientdbinfo, clientdblist, clientdbfind now show client_lastip
* fixed clientdblist count parameter always returning 1
* fixed issue where server kept status virtual_online while using the logout command
* fixed problem where a channel you could not subscribe to was not automatically unsubscribed when you left it
* fixed issue where VIRTUALSERVER_ASK_FOR_PRIVILEGEKEY && VIRTUALSERVER_AUTOGENERATED_PRIVILEGEKEY were exported/imported through a snapshot
* fixed issues where channel permissions would not take values from client-channel permissions
* fixed possibility of multiple complaints by one user if autoban count is set to two
* fixed strange serverquery password behavior
* improved client traffic reset performance
* improved database performance
I'm writing a class in Java to represent a Graph data structure. This is specific to an undirected, unweighted graph, and its purpose is mainly edge testing (is node A connected to node B, either directly or indirectly).

I need help implementing the indirectEdgeTest method. In the code below, I've only commented this method, and I'm returning false so the code will compile as-is.

I have put some time into coming up with an algorithm, but I can't seem to find anything simpler than this, and I fear I'm making it more complicated than it needs to be:

- test first for a direct connection
- if no direct connection exists from node a to node b:
    - for every edge i connected to node a:
        - create a new graph that does not contain edge a -> i
        - test new graph for indirect connectivity between nodes i and b

Either pseudocode or actual Java code is welcome in your answers. Thanks for your help. Here's the code I have:

    class Graph {

        // This is for an undirected, unweighted graph
        // This implementation uses an adjacency matrix for speed in edge testing

        private boolean[][] edge;
        private int numberOfNodes;

        public Graph(int numNodes) {
            // The indices of the matrix will not be zero-based, for clarity,
            // so the size of the array will be increased by 1.
            edge = new boolean[numNodes + 1][numNodes + 1];
            numberOfNodes = numNodes;
        }

        public void addEdge(int a, int b) {
            if (a <= numberOfNodes && a >= 1) {
                if (b <= numberOfNodes && b >= 1) {
                    edge[a][b] = true;
                    edge[b][a] = true;
                }
            }
        }

        public void removeEdge(int a, int b) {
            if (a <= numberOfNodes && a >= 1) {
                if (b <= numberOfNodes && b >= 1) {
                    edge[a][b] = false;
                    edge[b][a] = false;
                }
            }
        }

        public boolean directEdgeTest(int a, int b) {
            // if node a and node b are directly connected, return true
            boolean result = false;
            if (a <= numberOfNodes && a >= 1) {
                if (b <= numberOfNodes && b >= 1) {
                    if (edge[a][b] == true) {
                        result = true;
                    }
                }
            }
            return result;
        }

        public boolean indirectEdgeTest(int a, int b) {
            // if there exists a path from node a to node b, return true
            // implement indirectEdgeTest algorithm here.
            return false;
        }
    }

    Is your graph acyclic? – Richard Fearn Aug 23 '10 at 20:04
    It is not, it may contain cycles. – dvanaria Aug 23 '10 at 20:19

3 Answers

Your solution will work, but a better solution would be to construct a spanning tree from the root "a" node. This way you will eventually have only one tree to consider, instead of multiple sub-graphs that are only missing particular edges. Once you get the idea, how you implement it is up to you. Assuming you can implement the algorithm in a reasonable manner, you should only have one tree to search for connectivity, which would speed things up considerably.

    While linking to Wikipedia is a good idea, linking to the minimum spanning tree protocol as used in communication technology is not, because the discussion of cost, network loops, bridges, etc. has nothing to do with the problem at hand. – meriton Aug 23 '10 at 20:11

Erm, that approach sounds horribly inefficient.
What about this one:

    void walk(Node origin, Set<Node> visited) {
        for (Node n : origin.neighbours) {
            if (!visited.contains(n)) {
                visited.add(n);
                walk(n, visited);
            }
        }
    }

    boolean hasPath(Node origin, Node target) {
        Set<Node> reachables = new HashSet<Node>();
        walk(origin, reachables);
        return reachables.contains(target);
    }

Also, using an adjacency matrix is of questionable use for graph traversal, since you cannot efficiently iterate over a node's neighbours in a sparse graph.

If that method is frequently used, and the graph changes rarely, you can speed queries up by doing the decomposition into connected regions up front, and storing for each node the region it belongs to. Then, two nodes are connected if they belong to the same region.

Edit: To clarify how best to represent the graph: for direct edge testing, an adjacency matrix is preferred; for path testing, a decomposition into regions is. The latter is not trivial to keep current as the graph changes, but there may be algorithms for this in the literature. Alternatively, adjacency lists are serviceable for graph traversal and thus path testing, but they remain less efficient than directly recording the decomposition into connected regions. You can also use adjacency sets to combine more efficient neighbour iteration in sparse graphs with constant-time edge testing. Keep in mind that you can also store information redundantly, keeping, for each kind of query, a tailored, separate data structure.

    Ok, so your solution is using an adjacency list, correct? Is that the preferred implementation when the main use of a graph data structure will be direct and indirect edge testing? – dvanaria Aug 23 '10 at 20:25
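For illustration, here is one way the "decomposition into connected regions" could look as code (an editorial sketch, not part of the original answer; it supports edge additions only — removing an edge would force a rebuild, matching the caveat above):

    // Union-find over node ids 1..numNodes: near-constant-time hasPath
    // after processing each edge once.
    class Regions {
        private final int[] parent;

        Regions(int numNodes) {
            parent = new int[numNodes + 1];
            for (int i = 0; i <= numNodes; i++) parent[i] = i;
        }

        // Find the region representative, compressing the path as we go.
        int find(int x) {
            while (parent[x] != x) {
                parent[x] = parent[parent[x]]; // path halving
                x = parent[x];
            }
            return x;
        }

        // Adding an edge merges the two regions.
        void addEdge(int a, int b) { parent[find(a)] = find(b); }

        // Two nodes are connected iff they share a representative.
        boolean hasPath(int a, int b) { return find(a) == find(b); }
    }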
To compile: javac Graph.java To execute: java GraphTest class Graph { private java.util.ArrayList<Node> nodeList; private int numberOfNodes; public Graph(int size) { nodeList = new java.util.ArrayList<Node>(size + 1); numberOfNodes = size; for (int i = 0; i <= numberOfNodes; i++) { nodeList.add(new Node()); } } public void addEdge(int a, int b) { if (a >= 1 && a <= numberOfNodes) { if (b >= 1 && b <= numberOfNodes) { nodeList.get(a).addNeighbour(nodeList.get(b)); nodeList.get(b).addNeighbour(nodeList.get(a)); } } } public void walk(Node origin, java.util.Set<Node> visited) { for (Node n : origin.getNeighbours()) { if (!visited.contains(n)) { visited.add(n); walk(n, visited); } } } public boolean hasPath(Node origin, Node target) { java.util.Set<Node> reachables = new java.util.HashSet<Node>(); walk(origin, reachables); return reachables.contains(target); } public boolean hasPath(int a, int b) { java.util.Set<Node> reachables = new java.util.HashSet<Node>(); Node origin = nodeList.get(a); Node target = nodeList.get(b); walk(origin, reachables); return reachables.contains(target); } } class Node { private java.util.Set<Node> neighbours; public Node() { neighbours = new java.util.HashSet<Node>(); } public void addNeighbour(Node n) { neighbours.add(n); } public java.util.Set<Node> getNeighbours() { return neighbours; } } class GraphTest { private static Graph g; public static void main(String[] args) { g = new Graph(6); g.addEdge(1,5); g.addEdge(4,1); g.addEdge(4,3); g.addEdge(3,6); printTest(1, 2); printTest(1, 4); printTest(6, 1); } public static void printTest(int a, int b) { System.out.print("Are nodes " + a + " and " + b + " connected?"); if (g.hasPath(a, b)) { System.out.println(" YES."); } else { System.out.println(" NO."); } } } share|improve this answer add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
u010353960 潘春伟 2016-03-10 02:09

About synchronized and ReentrantLock in Thinking in Java

While working through the Concurrency chapter of Thinking in Java, the example comparing synchronized and ReentrantLock does not produce the results the book describes when I run it. The code is as follows:

    package thinking.in.java.chapter21;

    class Pair {
        private int x, y;

        public Pair(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public Pair() {
            this(0, 0);
        }

        public int getX() { return x; }
        public int getY() { return y; }
        public void incrementX() { x++; }
        public void incrementY() { y++; }

        @Override
        public String toString() {
            return "x=" + x + ", y=" + y;
        }

        public class PairValuesNotEqualException extends RuntimeException {
            public PairValuesNotEqualException() {
                super("Pair values not equal: " + Pair.this);
            }
        }

        public void checkstate() {
            if (x != y)
                throw new PairValuesNotEqualException();
        }
    }

    package thinking.in.java.chapter21;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public abstract class PairManager {
        AtomicInteger checkCounter = new AtomicInteger(0);
        protected Pair p = new Pair();
        private List<Pair> storage = Collections.synchronizedList(new ArrayList<Pair>());

        public synchronized Pair getPair() {
            return new Pair(p.getX(), p.getY());
        }

        protected void store(Pair p) {
            storage.add(p);
            try {
                TimeUnit.MILLISECONDS.sleep(50);
            } catch (InterruptedException e) {}
        }

        public abstract void increment();
    }

    package thinking.in.java.chapter21;

    public class PairChecker implements Runnable {
        private PairManager pm;

        public PairChecker(PairManager pm) {
            this.pm = pm;
        }

        @Override
        public void run() {
            while (true) {
                pm.checkCounter.incrementAndGet();
                pm.getPair().checkstate();
            }
        }
    }

    package thinking.in.java.chapter21;

    public class PairManager1 extends PairManager {
        @Override
        public synchronized void increment() {
            p.incrementX();
            p.incrementY();
            store(getPair());
        }
    }

    package thinking.in.java.chapter21;

    public class PairManager2 extends PairManager {
        @Override
        public void increment() {
            Pair temp;
            synchronized (this) {
                p.incrementX();
                p.incrementY();
                temp = getPair();
            }
            store(temp);
        }
    }

    package thinking.in.java.chapter21;

    public class PairManipulator implements Runnable {
        private PairManager pm;

        public PairManipulator(PairManager pm) {
            this.pm = pm;
        }

        @Override
        public void run() {
            while (true)
                pm.increment();
        }

        @Override
        public String toString() {
            return "Pair: " + pm.getPair() + " checkerCounter = " + pm.checkCounter.get();
        }
    }

    package thinking.in.java.chapter21;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class CriticalSection {
        static void testApproaches(PairManager pman1, PairManager pman2) {
            ExecutorService exec = Executors.newCachedThreadPool();
            PairManipulator
                pm1 = new PairManipulator(pman1),
                pm2 = new PairManipulator(pman2);
            PairChecker
                pchecker1 = new PairChecker(pman1),
                pchecker2 = new PairChecker(pman2);
            exec.execute(pm1);
            exec.execute(pm2);
            exec.execute(pchecker1);
            exec.execute(pchecker2);
            try {
                TimeUnit.MILLISECONDS.sleep(1000);
            } catch (InterruptedException e) {
                System.err.println("Sleep interrupted");
            }
            System.out.println("pm1: " + pm1 + "\npm2: " + pm2);
            System.exit(0);
        }

        public static void main(String[] args) {
            PairManager
                pman1 = new PairManager1(),
                pman2 = new PairManager2();
            testApproaches(pman1, pman2);
        }
    }
    package thinking.in.java.chapter21;

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class ExplicitPairManager1 extends PairManager {
        private Lock lock = new ReentrantLock();

        @Override
        public void increment() {
            lock.lock();
            try {
                p.incrementX();
                p.incrementY();
                store(getPair());
            } finally {
                lock.unlock();
            }
        }
    }

    package thinking.in.java.chapter21;

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class ExplicitPairManager2 extends PairManager {
        private Lock lock = new ReentrantLock();

        @Override
        public void increment() {
            lock.lock();
            Pair temp;
            try {
                p.incrementX();
                p.incrementY();
                temp = getPair();
            } finally {
                lock.unlock();
            }
            store(temp);
        }
    }

    package thinking.in.java.chapter21;

    public class ExolicitCritialSection {
        public static void main(String[] args) {
            PairManager
                pman1 = new ExplicitPairManager1(),
                pman2 = new ExplicitPairManager2();
            CriticalSection.testApproaches(pman2, pman1);
        }
    }

According to the book, neither ExolicitCritialSection nor CriticalSection should throw PairValuesNotEqualException. Yet both threads in ExolicitCritialSection do throw it. My own analysis: although increment is a synchronized operation, the checkstate method used by PairChecker is not synchronized and can read the X and Y values of the Pair object — so going by the book's example, it feels like both versions should throw PairValuesNotEqualException. Why does only ExolicitCritialSection throw it????

2 replies

u010353960 潘春伟 2016-03-10 02:13
Hoping a concurrency expert can answer this!!

u010353960 潘春伟 2016-03-10 02:30
Can nobody answer? Or is it just too much code?
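One plausible explanation (an editorial note, not a reply from the original thread): PairManager.getPair() is synchronized on this. In PairManager1/PairManager2, increment() also locks this, so the checker's getPair() can never run between incrementX() and incrementY(). In ExplicitPairManager1/ExplicitPairManager2, however, increment() is guarded by a separate ReentrantLock, which does not exclude the synchronized getPair() — the checker can copy the pair mid-update, and checkstate() then throws. A minimal sketch of a fix is to make reads take the same lock:

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class ExplicitPairManager1 extends PairManager {
        private Lock lock = new ReentrantLock();

        // Read under the SAME lock that increment() holds, so a checker
        // can never observe x and y between the two increments.
        @Override
        public Pair getPair() {
            lock.lock();
            try {
                return new Pair(p.getX(), p.getY());
            } finally {
                lock.unlock();
            }
        }

        @Override
        public void increment() {
            lock.lock();
            try {
                p.incrementX();
                p.incrementY();
                // Safe: ReentrantLock is reentrant, so the nested
                // getPair() call re-acquires the lock we already hold.
                store(getPair());
            } finally {
                lock.unlock();
            }
        }
    }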
Campaign Management API

What is the MOLOCO Cloud Campaign Management API

The MOLOCO Cloud Campaign Management API provides customers with a comprehensive set of APIs for creating and managing MOLOCO Cloud campaigns. It provides Create, Read, Update, Delete and List operations for the following entities:

• AdAccount
• Product (App) and TrackingLink
• Campaign and AdGroup
• CreativeGroup and Creative
• CustomTarget
• CustomerSet

Essential entities for launching a campaign

In order to launch a new campaign, you need to prepare several entities that the campaign depends on. Please see the entity relationship diagram below. If the entity you need is already there, you can reuse it. Otherwise, create a new one. Once you have AdAccount, Product (App), TrackingLink, and CreativeGroup entities ready, you can create a new campaign entity.

Recommended entity preparation order:

1. Create an AdAccount entity to which the campaign will belong, or reuse an existing one.
2. Create a Product entity that you want to advertise with the new campaign. If you want to advertise an existing product, you can reuse it.
3. Create or reuse a TrackingLink entity for the Product.
4. Create one or more Creative entities to expose to the media through the campaign. Each creative asset must be uploaded to public storage before creating a Creative. For user convenience, the feature to upload an asset and create a creative entity at once will be supported soon.
5. Create one or more CreativeGroup entities to be used with the campaign. Each CreativeGroup entity must refer to one or more Creative entities. You can also reuse existing CreativeGroup and Creative entities.
6. Create a new Campaign entity and launch it.

Usage example: From scratch to a campaign

1. Creating an AdAccount

Create a new AdAccount entity, the first entity in the order recommended above. For a detailed description of AdAccount entity creation, see Create a new ad account.

$ cat ad_account.json
{
  "title": "Ad Account For Tutorial",
  "description": "Ad Account For Tutorial",
  "timezone": "America/Los_Angeles",
  "currency": "USD"
}

$ curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "@ad_account.json" \
  "https://api.moloco.cloud/cm/v1/ad-accounts"

{
  "ad_account": {
    "id": "$AD_ACCOUNT_ID",
    "title": "Ad Account For Tutorial",
    "description": "Ad Account For Tutorial",
    "timezone": "America/Los_Angeles",
    "currency": "USD",
    "created_at": "2020-07-01T00:00:17.317689Z",
    "updated_at": "2020-07-01T00:00:17.317689Z"
  }
}

In this example, the parameters for a new AdAccount creation request are in the ad_account.json file and are delivered to the Campaign Management API as an HTTP POST request body. In the response, you can find various information, such as the AdAccount ID ($AD_ACCOUNT_ID) assigned to the newly created AdAccount, as well as the fields that were included in the request.

About the $TOKEN: please see "Getting Started with Cloud API". Its "Prerequisite" section explains how to issue a $TOKEN for MOLOCO Cloud authentication.

2. Creating a Product (App)

Next you will need to create a new Product entity. At this stage, you need the AdAccount ID that was created in the previous step. In the MOLOCO Cloud Campaign Management API, App and Web are managed as an entity called Product. For a detailed description of Product entity creation, see Create a new product.
$ cat product.json { "title": "Tutorial App", "advertiser_domain": "molocoads.com", "device_os": "ANDROID", "type": "APP", "app": { "bundle_id": "com.moloco.ads", "category": "IAB9-1", "rating": 5, "app_store_url": "https://play.google.com/store/apps/details?id=com.moloco.ads", "postback_integration": { "mmp": "TRACKFNT", "bundle_id": "com.moloco.ads" } } } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@product.json" \ "https://api.moloco.cloud/cm/v1/products?ad_account_id=$AD_ACCOUNT_ID" { "product": { "ad_account_id": "$AD_ACCOUNT_ID", "id": "$PRODUCT_ID", "title": "Tutorial App", "advertiser_domain": "molocoads.com", "device_os": "ANDROID", "type": "APP", "app": { "bundle_id": "com.moloco.ads", "category": "IAB9-1", "rating": 5.0, "app_store_url": "https://play.google.com/store/apps/details?id=com.moloco.ads", "tracking_company": "ADJUST", "tracking_bundle_id": "com.moloco.ads", "postback_integration": { "mmp": "TRACKFNT", "bundle_id": "com.moloco.ads" }, "mmp_integration": { "id": "mAZE5cU0tZl1uIA5", "mmp_bundle_id": "com.moloco.ads", "mmp": "ADJUST", "status": "UNDER_REVIEW", "updated_at": "2020-07-01T15:29:45.457575Z" } }, "created_at": "2020-07-01T15:29:45.732976Z", "updated_at": "2020-07-01T15:29:45.732976Z" } } In the above example response body, you can find the Product ID ($PRODUCT_ID), which will be used by the other entities below. 3. Creating a TrackingLink Next, create a TrackingLink entity. TrackingLink is an entity that contains the Tracking Company, CT(Click Through) Link, and VT(View Through) Link. CT Link is a url for notifying the Tracking Company when a click event occurs, and VT Link is for notifying when an impression event occurs. To create a TrackingLink entity, you need to know which Tracking Company your app is using, and you will also need CT Link and VT Link information issued by the Tracking Company. For a detailed description of TrackingLink entity creation, see Create a new tracking link. 
$ cat tracking_link.json { "title": "Tutorial App Tracking Link", "description": "Tutorial App Tracking Link", "device_os": "ANDROID", "click_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&mt_ad=%{creative_filename}&mt_ad_id=%{creative_id}&mt_ad_type=%{creative_type}&mt_click_lookback=7d&mt_cost_currency=%{cost_currency}&mt_cost_model=%{cost_type}&mt_cost_value=%{cost_amount}&mt_siteid=%{raw_app_bundle}&mt_sub4=%{source}&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int" }, "view_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&mt_ad=%{creative_filename}&mt_ad_id=%{creative_id}&mt_ad_type=%{creative_type}&mt_cost_currency=%{cost_currency}&mt_cost_model=%{cost_type}&mt_cost_value=%{cost_amount}&mt_siteid=%{raw_app_bundle}&mt_sub4=%{source}&mt_viewthrough_lookback=1d&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int" }, "tracking_company": "TRACKFNT" } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@tracking_link.json" \ "https://api.moloco.cloud/cm/v1/tracking-links?ad_account_id=$AD_ACCOUNT_ID&product_id=$PRODUCT_ID" { "tracking_link": { "id": "$TRACKING_LINK_ID", "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Tutorial App Tracking Link", "description": "Tutorial App Tracking Link", "device_os": "ANDROID", "click_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&ml_ad=%{creative_filename}&ml_ad_id=%{creative_id}&ml_ad_type=%{creative_type}&ml_click_lookback=7d&ml_cost_currency=%{cost_currency}&ml_cost_model=%{cost_type}&ml_cost_value=%{cost_amount}&ml_siteid=%{raw_app_bundle}&ml_sub4=%{source}&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int", "status": "UNVERIFIED", "unverified_status_data": { "stored_at": "2020-07-01T09:50:37Z" } }, "view_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&ml_ad=%{creative_filename}&ml_ad_id=%{creative_id}&ml_ad_type=%{creative_type}&ml_cost_currency=%{cost_currency}&ml_cost_model=%{cost_type}&ml_cost_value=%{cost_amount}&ml_siteid=%{raw_app_bundle}&ml_sub4=%{source}&ml_viewthrough_lookback=1d&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int", "status": "UNVERIFIED", "unverified_status_data": { "stored_at": "2020-07-01T09:50:37Z" } }, "tracking_company": "TRACKFNT", "created_at": "2020-07-01T09:50:37.924722Z", "updated_at": "2020-07-01T09:50:37.924722Z" } } In the above example response body, you can find the TrackingLink ID ($TRACKING_LINK_ID), which will be used by the CreativeGroup entities. 4. Creating a Creative Each campaign must have one or more CreativeGroup entities to advertise its app. And each CreativeGroup entity may have multiple Creative entities. It's better to create Creative entities first before creating a CreativeGroup entity. 4-1. Uploading a Creative asset to the storage using MOLOCO Cloud API In order to create Creative entity, you should upload the file first. As we are not providing creative asset compression, to reduce the size of your assets, please compress them before uploading. To upload a creative asset to a storage, 1) you should get an upload URL for your asset and 2) initiate a resumable upload session by making a call to the upload URL with a POST request, 3) next you can upload the creative asset by making another call to the same URL with a PUT request. 
For a detailed description of asset upload session creation, see Create a new asset upload session. $ cat asset.json { "asset_kind": "CREATIVE", "mime_type": "image/jpeg", "file_name": "test_image.jpg" } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@asset.json" \ "https://api.moloco.cloud/cm/v1/creative-assets?ad_account_id=$AD_ACCOUNT_ID" { "asset_url": "$CREATIVE_ASSET_URL", "content_upload_url": "$UPLOAD_URL" } You can find the upload URL ($UPLOAD_URL) in the "content_upload_url" field. Now upload the creative asset to $UPLOAD_URL. MOLOCO Cloud provides Google Cloud Storage for Creative assets, and $UPLOAD_URL points to the URL for Resumable uploads. Initiate a resumable upload session as follows: $ curl -v -X POST \ -H "x-goog-resumable: start" \ -H "Content-Type: image/jpeg" \ "$UPLOAD_URL" 2>&1 \ | grep "x-guploader-uploadid" < x-guploader-uploadid: $UPLOAD_ID Please check out the upload id ($UPLOAD_ID) at the x-guploader-uploadid field of the response. Using the the upload id, you can upload the asset file to the storage by making a call to the upload URL with a PUT request. But please append a query parameter "upload_id=$UPLOAD_ID" at the end of the URL. $ASSET_PATH is the path where the creative asset to upload is in the following request: $ curl -X PUT \ --data-binary "@$ASSET_PATH" \ -H "Content-Type: image/jpeg" \ "$UPLOAD_URL&upload_id=$UPLOAD_ID" Now your creative asset has been uploaded to the storage. You can access it at the URL $CREATIVE_ASSET_URL. Please use this URL in the following step. 4-2. Creating a Creative If your creative asset is uploaded to a storage and has an accessible URL, create a Creative entity like the following. Let's say your asset's URL is $CREATIVE_ASSET_URL. For a detailed description of Creative entity creation, see Create a new creative. $ cat creative.json { "title": "Test Image Creative", "type": "IMAGE", "original_filename": "test_image.jpg", "size_in_bytes": 94994, "image": { "image_url": "$CREATIVE_ASSET_URL", "width": 640, "height": 960, "size_in_bytes": 94994 } } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@creative.json" \ "https://api.moloco.cloud/cm/v1/creatives?ad_account_id=$AD_ACCOUNT_ID&product_id=$PRODUCT_ID" { "creative": { "id": "$CREATIVE_ID", "ad_account_id": "$AD_ACCOUNT_ID", "title": "Test Image Creative", "enabling_state": "ENABLED", "ad_account_info": { "id": "$AD_ACCOUNT_ID", "title": "Ad Account For Tutorial", "description": "Ad Account For Tutorial" }, "type": "IMAGE", "original_filename": "test_image.jpg", "size_in_bytes": 94994, "image": { "image_url": "$CREATIVE_ASSET_URL", "width": 640, "height": 960, "size_in_bytes": 94994 }, "created_at": "2020-07-01T14:54:00.526009Z", "updated_at": "2020-07-01T14:54:00.526009Z" } } In the above example response body, you can find the Creative ID ($CREATIVE_ID), which is required by the CreativeGroup entity. 5. Creating a CreativeGroup CreativeGroup entities group Creative entities that have a similar appearance to help users easily reference them in campaigns. Create a CreativeGroup entity that references one or more of the pre-created Creative entities. Note that the CreativeGroup entity also refers to the TrackingLink entity. For a detailed description of CreativeGroup entity creation, see Create a new creative group. 
$ cat creative_group.json { "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Test CreativeGroup", "description": "Test CreativeGroup", "enabling_state": "ENABLED", "status": "APPROVED", "tracking_link_id": "$TRACKING_LINK_ID", "creative_ids": [ "$CREATIVE_ID" ] } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@creative_group.json" \ "https://api.moloco.cloud/cm/v1/creative-groups?ad_account_id=$AD_ACCOUNT_ID&product_id=$PRODUCT_ID" { "creative_group": { "id": "tla69ad6dz2JAUAY", "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Test CreativeGroup", "description": "Test CreativeGroup", "enabling_state": "ENABLED", "creative_ids": [ "$CREATIVE_ID" ], "status": "APPROVED", "action_history": [ { "id": "05fb3aa5-44b9-4943-77c4-90f5f15a31c7", "action_type": "CREATE", "created_at": "2020-07-01T15:21:05.797660752Z" } ], "tracking_link_id": "$TRACKING_LINK_ID", "audience": {}, "created_at": "2020-07-01T15:21:05.909989Z", "updated_at": "2020-07-01T15:21:05.909989Z" }, "creatives": [ { "id": "$CREATIVE_ID", "ad_account_id": "$AD_ACCOUNT_ID", "title": "Test Image Creative", "enabling_state": "ENABLED", "ad_account_info": { "id": "$AD_ACCOUNT_ID", "title": "Ad Account For Tutorial", "description": "Ad Account For Tutorial" }, "type": "IMAGE", "original_filename": "test_image.jpg", "size_in_bytes": 94994, "image": { "image_url": "$CREATIVE_ASSET_URL", "width": 640, "height": 960, "size_in_bytes": 94994 }, "created_at": "2020-07-01T14:54:00.526009Z", "updated_at": "2020-07-01T14:54:00.526009Z" } ], "tracking_link": { "id": "$TRACKING_LINK_ID", "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Tutorial App Tracking Link", "description": "Tutorial App Tracking Link", "device_os": "ANDROID", "click_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&ml_ad=%{creative_filename}&ml_ad_id=%{creative_id}&ml_ad_type=%{creative_type}&ml_click_lookback=7d&ml_cost_currency=%{cost_currency}&ml_cost_model=%{cost_type}&ml_cost_value=%{cost_amount}&ml_siteid=%{raw_app_bundle}&ml_sub4=%{source}&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int", "status": "UNVERIFIED", "unverified_status_data": { "stored_at": "2020-07-01T09:50:37Z" } }, "view_through_link": { "url": "https://tracker-us.adsmoloco.com/com.trackfnt.adNetworkTest?advertising_id=%{a_idfa}&ml_ad=%{creative_filename}&ml_ad_id=%{creative_id}&ml_ad_type=%{creative_type}&ml_cost_currency=%{cost_currency}&ml_cost_model=%{cost_type}&ml_cost_value=%{cost_amount}&ml_siteid=%{raw_app_bundle}&ml_sub4=%{source}&ml_viewthrough_lookback=1d&c=%{campaign}&clickid=%{bid_id}&pid=moloco_int", "status": "UNVERIFIED", "unverified_status_data": { "stored_at": "2020-07-01T09:50:37Z" } }, "tracking_company": "TRACKFNT", "created_at": "2020-07-01T09:50:37.924722Z", "updated_at": "2020-07-01T09:50:37.924722Z" } } In the above example response body, you can find the CreativeGroup ID ($CREATIVE_GROUP_ID), which is required by the following entities. Also, for the user's convenience, the contents of Creative and TrackingLink referenced by CreativeGroup are embedded. 6. Creating a Campaign Now, it's time to create a Campaign entity using the entities created above. For a detailed description of Campaign entity creation, see Create a new campaign. 
$ cat campaign.json { "campaign": { "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Test Campaign", "description": "Test Campaign", "enabling_state": "ENABLED", "type": "APP_USER_ACQUISITION", "device_os": "ANDROID", "country": "KOR", "currency": "USD", "schedule": { "start": "2020-07-01T07:00:00Z" }, "budget_schedule": { "daily_budget": { "daily_budget": { "currency": "USD", "amount_micros": "1000000" } } }, "tracking_company": "TRACKFNT", "goal": { "type": "OPTIMIZE_CPI_FOR_APP_UA", "optimize_app_installs": { "mode": "TARGET_CPI_CENTRIC", "target_cpi": { "currency": "USD", "amount_micros": "1000000" } } } }, "ad_group": { "title": "Test AdGroup", "enabling_state": "ENABLED", "audience": {}, "capper": { "imp_interval": { "amount": "12", "unit": "HOUR" } }, "creative_group_ids": [ "$CREATIVE_GROUP_ID" ] } } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@campaign.json" \ "https://api.moloco.cloud/cm/v1/campaigns?ad_account_id=$AD_ACCOUNT_ID&product_id=$PRODUCT_ID" { "campaign": { "id": "$CAMPAIGN_ID", "ad_account_id": "$AD_ACCOUNT_ID", "product_id": "$PRODUCT_ID", "title": "Test Campaign", "description": "Test Campaign", "enabling_state": "ENABLED", "type": "APP_USER_ACQUISITION", "device_os": "ANDROID", "state": "SUBMITTED", "country": "KOR", "currency": "USD", "schedule": { "start": "2020-07-01T07:00:00Z" }, "budget_schedule": { "daily_budget": { "daily_budget": { "currency": "USD", "amount_micros": "1000000" } } }, "audience": { "shared_audience_target_ids": [], "targeting_condition": { "allowed_exchanges": [ "ADX", "MOPUB", "UNITY" ] } }, "tracking_company": "TRACKFNT", "goal": { "type": "OPTIMIZE_APP_INSTALLS", "optimize_app_installs": { "mode": "TARGET_CPI_CENTRIC", "target_cpi": { "currency": "USD", "amount_micros": "1000000" } } }, "created_at": "2020-07-01T15:34:43.323228Z", "updated_at": "2020-07-01T15:34:43.323228Z" } } Right after creating a Campaign entity, the campaign's state is "SUBMITTED". After a certain period of time (approximately 2 hours), when the ML Model for the campaign is ready, the state changes to "ACTIVE", which indicates that the campaign can participate in bidding. Finally, you are ready to launch a new campaign. Once the campaign's state became "ACTIVE" automatically, the creatives you set up will be live via MOLOCO Cloud. If you want to evaluate the performance after the campaign has been launched, you can check out the details through the Report API and Log API. Usage example: Create a CustomerSet 1. Creating a CustomerSet CustomerSet is an entity representing a set of customer IDFAs. This can be used to target or detarget specific customer groups. To create a CustomerSet, you need to upload a csv file containing customer IDFAs, which is basically similar to the Creative creation process. Please notice that you need a url for your CustomerSet asset, and it should be uploaded to the Google Cloud Storage. Unlike creating Creatives, CustomerSet assets must be uploaded to the GCS, so 1-1 cannot be skipped. 1-1. Uploading a CustomerSet asset to the storage using MOLOCO Cloud API To upload a CustomerSet asset to a storage, 1) you should get an upload URL for your asset and 2) initiate a resumable upload session by making a call to the upload URL with a POST request, 3) next you can upload the CustomerSet asset by making another call to the same URL with a PUT request. For a detailed description of asset upload session creation, see Create a new asset upload session. 
$ cat asset.json { "asset_kind": "CSV", "mime_type": "text/csv", "file_name": "test_customer_set.csv" } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@asset.json" \ "https://api.moloco.cloud/cm/v1/creative-assets?ad_account_id=$AD_ACCOUNT_ID" { "asset_url": "$CUSTOMER_SET_ASSET_URL", "content_upload_url": "$UPLOAD_URL" } You can find the upload URL ($UPLOAD_URL) in the "content_upload_url" field. Now upload the CustomerSet asset to $UPLOAD_URL. MOLOCO Cloud provides Google Cloud Storage for CustomerSet assets, and $UPLOAD_URL points to the URL for Resumable uploads. Initiate a resumable upload session as follows: $ curl -v -X POST \ -H "x-goog-resumable: start" \ -H "Content-Type: text/csv" \ "$UPLOAD_URL" 2>&1 \ | grep "x-guploader-uploadid" < x-guploader-uploadid: $UPLOAD_ID Please check out the upload id ($UPLOAD_ID) at the x-guploader-uploadid field of the response. Using the the upload id, you can upload the asset file to the storage by making a call to the upload URL with a PUT request. But please append a query parameter "upload_id=$UPLOAD_ID" at the end of the URL. $ASSET_PATH is the path where the CustomerSet asset to be uploaded in the following request is located: $ curl -i -X PUT \ --data-binary "@$ASSET_PATH" \ -H "Content-Type: text/csv" \ -H "Connection: keep-alive" \ "$UPLOAD_URL&upload_id=$UPLOAD_ID" If you see HTTP 200 OK or 201 in the response message, it indicates that the upload is completed. Now your CustomerSet asset has been uploaded to the storage. You can access it at the URL $CUSTOMER_SET_ASSET_URL. Please use this URL in the following step. 📘 Checking the integrity of the uploaded file If you want to check the integrity of the uploaded file, use the following request: $ curl -i -X PUT \ -H "Content-Length: 0" \ -H "Content-Range: bytes */*" \ "$UPLOAD_URL&upload_id=$UPLOAD_ID" 2>&1 \ | grep "x-goog-hash\|HTTP\|x-goog-stored-content-encoding" HTTP/2 200 x-goog-hash: crc32c=$GOOG_CRC32C x-goog-hash: md5=$GOOG_HASH x-goog-stored-content-encoding: identity If your local machine has gsutil installed, then you can verify integrity by comparing $GOOG_HASH and $FILE_HASH. $ gsutil hash -m $ASSET_PATH Hashes [base64] for $ASSET_PATH: Hash (md5): $FILE_HASH 1-2. Creating a CustomerSet If your CustomerSet asset is uploaded to the GCS and has an accessible URL, create a CustomerSet entity like the following. Let's say your asset's URL is $CUSTOMER_SET_ASSET_URL. You can set GOOGLE_ADID or APPLE_IDFA in the id_type field, which means Google's ADID and Apple's IDFA, respectively. You cannot mix ADID and IDFA in one CustomerSet. For a detailed description of CustomerSet entity creation, see Create a new customer set. $ cat customer_set.json { "title": "Test CustomerSet", "description": "Test CustomerSet", "id_type": "GOOGLE_ADID", "data_file_path": "$CUSTOMER_SET_ASSET_URL" } $ curl -X POST \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "@customer_set.json" \ "https://api.moloco.cloud/cm/v1/customer-sets?ad_account_id=$AD_ACCOUNT_ID" { "customer_set": { "id": "$CUSTOMER_SET_ID", "title": "Test CustomerSet", "description": "Test CustomerSet", "id_type": "GOOGLE_ADID", "status": "PREPARING", "data_file_path": "$CUSTOMER_SET_ASSET_URL", "created_at": "2020-07-01T15:34:43.323228Z", "updated_at": "2020-07-01T15:34:43.323228Z" } } In the above example response body, you can find the CustomerSet ID ($CUSTOMER_SET_ID), which can be used by AudienceTarget. See Create a new audience target.
[If you don't feel like reading the docs] A dead-simple intro to Vue 3

I finished watching Rap for Youth at midnight today, and since it was already so late anyway, I figured I'd stay up a bit longer and write this up. I got a bit sleepy toward the end... please bear with me.

Enough talk — let's get right to it.

You can pick any of the project-creation methods; I personally went with the CLI scaffold.

Upgrade it (i.e. reinstall):

    npm install -g @vue/cli
    # OR
    yarn global add @vue/cli

    vue create hello-vue3

The only difference from before is an extra version prompt — just pick vue3 and you're set.

Part 1: The highlights

We'll start with the notable features listed on the official site (ignore the last three for now).

1.1. Composition API

Never mind comparing the Options API with the Composition API for now — let's get hands-on with the Composition API directly.

The Composition API's entry point is the setup function.

Creating reactive data: ref and reactive

ref

The ref function takes one argument and returns a reactive ref object. Straight to an example:

    <template>
      <div>
        {{num}}
      </div>
    </template>

    <script>
    import { ref } from "vue";
    export default {
      setup() {
        const num = ref(1);
        return { num };
      },
    };
    </script>

Which is equivalent to this Vue 2.x style:

    data(){
      return {
        num: 1
      }
    }

Worth noting: ref returns an object, and you get at its value through its value property, i.e. num.value. When used in a template, though, it unwraps automatically — which is why the template above uses num directly. There is one more auto-unwrap case; see below.

For now, you can think of ref as the way to make primitive-type data reactive.

reactive

Where ref handles primitive data, reactive makes reference-type data reactive. Straight to an example:

    <template>
      <div>{{num}}{{obj.name}}</div>
    </template>

    <script>
    import { ref, reactive } from "vue";
    export default {
      setup() {
        const num = ref(1);
        const obj = reactive({
          name: "gxb",
          age: 18,
        });
        return { num, obj };
      },
    };
    </script>

The other auto-unwrap case for a ref object is here: when it is a property of the object passed to reactive, i.e.:

    const num = ref(1);
    const obj = reactive({
      name: "gxb",
      age: 18,
      num
    });

Worth noting: don't casually use the spread (...) syntax or destructuring here, or the reactivity is lost; there are examples in the utility-functions part at the end.

readonly

This function takes a reactive or plain object, or a ref, and returns a (deep) read-only proxy. Example:
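A minimal sketch (assuming a reactive source object; in development builds the blocked write also logs a warning):

    import { reactive, readonly } from "vue";

    const original = reactive({ count: 0 });
    const copy = readonly(original);

    original.count++;        // ok: the change is visible through `copy` as well
    console.log(copy.count); // 1
    copy.count++;            // blocked: `copy` is a deep read-only proxy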
onUpdated(() => { console.log('updated!') }) onUnmounted(() => { console.log('unmounted!') }) }, } 复制代码 对应2.x钩子 props和this props setup这个入口函数接收的第一个参数就是props 栗子 这里需要注意,不要随便进行解构 即图省事 props: { data: String, }, setup({ data }) { console.log(data); } 复制代码 解构会使其丧失响应式的 this 2.x中拿组件实例实例很好拿,一般就是直接this,但是setup则不同。 但是组件实例上有许多api我们还是要使用的 故setup的第二个参数是一个上下文对象 栗子:派发一个自定义事件 值得注意的是这个context只是选择性的暴露了一些属性,如上面的emit还有attrs、slots 依赖注入与Refs 依赖注入 和vue2.x的provide和inject基本一样 栗子 为图简单我用的一个组件举例 它们的响应式需要自己出来一下(如用ref) Refs 如拿下面这个节点 <template> <div ref="test">test</div> </template> <script> import { ref, reactive, computed, watch, provide, inject, onMounted, } from "vue"; export default { setup() { const test = ref(null); onMounted(() => { console.log(test.value); }); return { test }; }, }; </script> 复制代码 一些工具函数 先来写下破坏reactive生成的响应对象代理的栗子 首先正常写法 <template> <div ref="test"> {{obj.age}} <button @click="obj.age++">add</button> </div> </template> <script> import { ref, reactive, computed, watch, provide, inject, readonly } from "vue"; export default { props: { data: String, }, setup(props, context) { console.log(props.data); context.emit("test"); const obj = reactive({ name: "gxb", age: 18, }); return { obj }; }, }; </script> 复制代码 使用扩展语法 <template> <div ref="test"> {{age}} <button @click="age++">add</button> </div> </template> <script> import { ref, reactive, computed, watch, provide, inject, readonly } from "vue"; export default { props: { data: String, }, setup(props, context) { console.log(props.data); context.emit("test"); const obj = reactive({ name: "gxb", age: 18, }); return { ...obj }; }, }; </script> 复制代码 解构出来的同样不行 <template> <div ref="test"> {{age}} <button @click="age++">add</button> </div> </template> <script> import { ref, reactive, computed, watch, provide, inject, readonly } from "vue"; export default { props: { data: String, }, setup(props, context) { console.log(props.data); context.emit("test"); const obj = reactive({ name: "gxb", age: 18, }); const { age } = obj; return { age }; }, }; </script> 复制代码 这个的原理也很简单,reactive的内部原理是Proxy,它操作均在返回的代理实例上 下面开始介绍几个工具函数 1. unref,参数是一个ref则返回这个ref的value属性,否则返本身 2. toRef,给一个 reactive 对象的属性创建一个 ref 3. toRefs, 把一个响应式对象转换成普通对象,该普通对象的每个 property 都是一个 ref 4. isRef,判断一个值是否是ref 5. isProxy,判断一个对象是否是由 reactive 或者 readonly 方法创建的代理。 6. isReactive,判断一个对象是否是由 reactive 创建的响应式代理 7. isReadonly,判断一个对象是否是由 readonly 创建的只读代理。 仅给2、3写个例子吧 toRef,即把reactive 对象上的一个属性变成ref 什么意思呢,还是看上面的破坏了响应式的栗子 修复一下(即可把它一个属性抽出来做成响应式的ref,且它们还是互相关联的) <template> <div ref="test"> {{age}} <button @click="age++">add</button> </div> </template> <script> import { ref, reactive, computed, watch, provide, inject, readonly, toRef, } from "vue"; export default { props: { data: String, }, setup(props, context) { const obj = reactive({ name: "gxb", age: 18, }); const age=toRef(obj, "age"); watch(()=>obj.age,(newAge,oldAge)=>{ console.log(newAge); }) return { age }; }, }; </script> 复制代码 toRefs则是把这个对象里面的所有属性均整成ref,这个修复则更简单了 <template> <div ref="test"> {{age}} <button @click="age++">add</button> </div> </template> <script> import { ref, reactive, computed, watch, provide, inject, readonly, toRef, toRefs } from "vue"; export default { props: { data: String, }, setup(props, context) { const obj = reactive({ name: "gxb", age: 18, }); const obj02=toRefs(obj); return { ...obj02 }; }, }; </script> 复制代码 1.2. 
Teleport 传送门,顾名思义 场景:某一些组件中我们可能需要一个模态框的功能,然而虽然逻辑上来说这个模态框是属于该组件中的,但是实际操作一般需要把这个框子挂到body上。Teleport 组件就是帮我们处理这个问题的 来看栗子 假设组件中需要有一个模态框 <template> <div> <model></model> </div> </template> <script> import Model from './model' export default { components:{Model} } </script> 复制代码 模态框组件 <template> <div> <button @click="flag=true">点击</button> <teleport to="body"> <div v-if="flag"> <div>模态框</div> </div> </teleport> </div> </template> <script> import { ref } from "vue"; export default { setup() { const flag = ref(false); return { flag }; }, }; </script> 复制代码 即teleport组件的作用就是把teleport标签里面的元素传送到body上去 再看层级 1.3. Fragments 这个的理解就更简单了 原来只能这样:即只允许存在一个最外层的父元素div <template> <div> ... </div> </template> 复制代码 现在可多个 <template> <div> ... </div> <div> ... </div> ... </template> 复制代码 1.4. Emits Component Option 1.4.1 自定义事件派发 这里的重点:即多了一个派发事件的选项emits 也就我们以后再次使用emit派发一个事件的时候需要把这此派发的事件名放到选项里 栗子: <template> <div> <button @click="$emit('test')">点击</button> </div> </template> <script> export default { emits: ["test"], }; </script> 复制代码 注意:这里如果你派发的是一个原生事件,且没有把此事件放进emits选项中,其父组件的监听会被触发两次 <template> <div> <button @click="$emit('click')">点击</button> </div> </template> <script> export default { // emits: ["click"], }; </script> 复制代码 1.4.2 v-model vue3中的v-model,所借助的属性是 modelValue 所借助的事件是 update:modelValue (且3中把sync移出掉了) 栗子 父组件 <template> <div id="nav"> {{data}} <test05 v-model="data"></test05> </div> </template> <script> import { ref } from "vue"; import Test05 from "./components/test05"; export default { components: { Test05 }, setup() { const data=ref('gxb') return {data}; }, }; </script> 复制代码 子组件 <template> <div> <input type="text" :value="modelValue" @input="$emit('update:modelValue',$event.target.value)" /> </div> </template> <script> export default { props:{ modelValue:String }, emits:['update:modelValue'] } </script> 复制代码 自定义属性名,vue2.x中可通过 model选项 指定传过来的属性名和指定本次v-model要利用的事件。 vue3中也可自定属性 栗子 父组件(即在v-model后面指定绑定) <test05 v-model:foo="data"></test05> 复制代码 子组件 <template> <div> <input type="text" :value="foo" @input="$emit('update:foo',$event.target.value)" /> </div> </template> <script> export default { props:{ foo:String }, emits:['update:foo'] } </script> 复制代码 一个组件中可写多个v-model指令 栗子: 父组件 <test01 v-model:foo="a" v-model:bar="b"></test01> 复制代码 子组件 <template> <div> <input type="text" :value="foo" @input="$emit('update:foo',$event.target.value)" /> <input type="text" :value="bar" @input="$emit('update:bar',$event.target.value)" /> </div> </template> <script> export default { props: { foo: String, bar: String, }, emits: ["update:foo", "update:bar"], setup(props) { return {}; }, }; </script> 复制代码 1.5. createRendererAPI 自定义渲染器字如其名,它的主要功能是我们可以自定义 Virtual DOM 到DOM的方式,看了好3、4个大佬的栗子都是用canvas画了图。自己这里想不出什么栗子来先不写了 二、其他 2.1 Global API 在vue2.x中没有应用的概念,vue2.x中的所谓的“app”也只不过是一个通过Vue构造出的实例。但是2.x中的一些全局API(像mixins、use、 component等 )是直接在Vue构造函数中的 也即如果下面还有使用new Vue的“应用”,这些全局API很容易造成污染 下面来看一眼vue3的入口文件 import { createApp } from 'vue' import App from './App.vue' import router from './router' import store from './store' createApp(App).use(store).use(router).mount('#app') 复制代码 现在有了一个createApp,这个方法就返回一个应用程序的实例 拿component写个栗子 import { createApp, h } from 'vue' import App from './App.vue' import router from './router' import store from './store' createApp(App) .component('test06', { render() { return h('div', {}, '全局组件') } }) .use(store) .use(router) .mount('#app') 复制代码 其他API相应改变,如官网 Global API Treeshaking 官网是以 Vue.nextTick() ,这个全局API来举例的 这个摇树是什么玩意呢? 
通过这句话可以理解,即这些个API不分青红皂白就写死在vue的构造函数上,如果我们在应用中本就没有对这些个API进行使用。那么打包时把这些东西也打进去是不是浪费性能同时也增大了打包体积呢 故vue3中,nextTick的使用也是需要从vue中导入一下的 import { nextTick } from 'vue' nextTick(() => { ... }) 复制代码 其他受影响的API 2.2 Template Directives v-model v-model上面已经写了,去掉了.sync,使用v-model进行了统一 v-if、v-for优先级问题 在2.x是v-for优先级高,在3.0中v-if的优先级高 2.3 Components 函数式组件 因为在vue3中函数式组件现在在性能上的提升可以忽略不计,还是推荐使用状态组件。 并且这里函数式组件只能通过纯函数进行声明,只能接受props和context(也是emit、slots、attrs) 简单搞个例子 这里偷个懒吧,把官网的栗子拿过来 vue2.x // Vue 2 Functional Component Example export default { functional: true, props: ['level'], render(h, { props, data, children }) { return h(`h${props.level}`, data, children) } } 复制代码 vue3,区别除了下面所要说的h函数变化问题,还有上面提到的参数问题,再一个变化就是去掉了 functional: true, import { h } from 'vue' const DynamicHeading = (props, context) => { return h(`h${props.level}`, context.attrs, context.slots) } DynamicHeading.props = ['level'] export default DynamicHeading 复制代码 单文件形式对比 2.x // Vue 2 Functional Component Example with <template> <template functional> <component :is="`h${props.level}`" v-bind="attrs" v-on="listeners" /> </template> <script> export default { props: ['level'] } </script> 复制代码 3.0,区别去掉了functional,监听器放进了$attrs且可删除 <template> <component v-bind:is="`h${props.level}`" v-bind="$attrs" /> </template> <script> export default { props: ['level'] } </script> 复制代码 异步组件 原来异步组件咋整的呢 const asyncPage = () => import('./NextPage.vue') 复制代码 或者带选项的 const asyncPage = { component: () => import('./NextPage.vue'), delay: 200, timeout: 3000, error: ErrorComponent, loading: LoadingComponent } 复制代码 但是vue3中不一样了,这里有一个新的API defineAsyncComponent用来显示定义异步组件 也即 const asyncPage = defineAsyncComponent(() => import('./NextPage.vue')) 复制代码 const asyncPageWithOptions = defineAsyncComponent({ loader: () => import('./NextPage.vue'), delay: 200, timeout: 3000, errorComponent: ErrorComponent, loadingComponent: LoadingComponent }) 复制代码 细心看也可看出component改成了loader 还有一点不同的是,2.x中函数参数中可接收resolve,reject,3.0则不可,但是必须要返回一个Promise 2.4 Render Function 渲染函数的改变 即原来的h函数是这样的 export default { render(h) { return h('div') } } 复制代码 而现在h函数则需要从vue的再导入进来 其实我上面有一个栗子已经用到了,再拿过来一次 import { createApp, h } from 'vue' import App from './App.vue' import router from './router' import store from './store' createApp(App) .component('test06', { render() { return h('div', {}, '全局组件') } }) .use(store) .use(router) .mount('#app') 复制代码 还有一个属性的变动,直接拿官网的栗子吧 2.x 中的节点属性格式 { class: ['button', 'is-outlined'], style: { color: '#34495E' }, attrs: { id: 'submit' }, domProps: { innerHTML: '' }, on: { click: submitForm }, key: 'submit-button' } 复制代码 在3.0中,这些属性不再被嵌套,被展平了(这看起来更像DOM节点上的东西了吧) { class: ['button', 'is-outlined'], style: { color: '#34495E' }, id: 'submit', innerHTML: '', onClick: submitForm, key: 'submit-button' } 复制代码 插槽方面 废掉了$scopedSlots,使用$slots vue2.x中,一个组件使用渲染函数拿插槽是这样的 <script> export default { render(h) { return h('div',{},this.$scopedSlots.default) }, } </script> 复制代码 vue3.x中则是这样的 <script> import {h} from 'vue' export default { props:{ data:String }, render() { return h('div',{},this.$slots.default()) }, } </script> 复制代码 2.5 Custom Elements 自定义元素白名单 如一些特殊的组件,我们要特殊用处的希望vue的编译忽略 栗子 直接往组件中放一个为注册过的组件 <test08></test08> 复制代码 不希望出现这个错就把它放进白名单里 使用构建工具版本 rules: [ { test: /\.vue$/, use: 'vue-loader', options: { compilerOptions: { isCustomElement: tag => tag === 'test08' } } } // ... ] 复制代码 运行时编译版本 const app = Vue.createApp({}) app.config.isCustomElement = tag => tag === 'test08' 复制代码 is只能用在<component> 但是 <component :is="componentId"></component> 除了这个玩意,我们别的地方可能也有需求需要用到is怎么办呢 故vue3中推出了v-is指令 就写到这了吧,也基本差不多了。撑不住了...晚安好梦
XOR of numbers | HackerEarth solution

Problem

You are given an array A of N numbers and Q queries. In each query, find the XOR of all N numbers except those in the range [L, R].

Input format

- The first line contains two integers: N, denoting the number of elements, and Q, denoting the number of queries.
- The second line contains N space-separated integers.
- The next Q lines each contain two integers L and R.

Output format

For each query, print the answer as described in the problem statement.

Example

Let the array A be [1, 2, 3, 4] and the query be (2, 3); then print the XOR of the whole array excluding the range [2, 3]. Hence, the answer is 1 XOR 4 = 5.

Constraints

1 ≤ N ≤ 10^5
1 ≤ Q ≤ 10^5
1 ≤ A[i] ≤ 10^9
1 ≤ L ≤ R ≤ N

Sample Input

5 2
1 2 3 4 5
2 3
3 4

Sample Output

0
6

Time Limit: 1 second
Memory Limit: 256 MB

Code (C++):

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int n, q;
    cin >> n >> q;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    vector<int> presum(n + 1, 0);
    for (int i = 0; i < n; i++) presum[i + 1] = presum[i] ^ a[i];
    int l, r;
    while (q--) {
        cin >> l >> r;
        l--, r--;
        cout << (presum.back() ^ presum[r + 1] ^ presum[l]) << '\n';
    }
    return 0;
}
```

Code (Java):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.*;

class TestClass {
    public static PrintWriter out = new PrintWriter(System.out);

    // copied from codeforces
    public static class MyScanner {
        BufferedReader br;
        StringTokenizer st;

        public MyScanner() {
            br = new BufferedReader(new InputStreamReader(System.in));
        }

        String next() {
            while (st == null || !st.hasMoreElements()) {
                try {
                    st = new StringTokenizer(br.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            return st.nextToken();
        }

        int nextInt() { return Integer.parseInt(next()); }
        long nextLong() { return Long.parseLong(next()); }
        double nextDouble() { return Double.parseDouble(next()); }

        String nextLine() {
            String str = "";
            try {
                str = br.readLine();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return str;
        }
    }

    static final int MAX = 100_010;
    static int[] A = new int[MAX];
    static int[][] prefixSum = new int[MAX][32];

    public static void main(String args[]) throws Exception {
        final var scanner = new MyScanner();
        final int N = scanner.nextInt();
        final int Q = scanner.nextInt();
        for (int i = 1; i <= N; i++) {
            A[i] = scanner.nextInt();
            for (int b = 0; b < 32; b++) {
                prefixSum[i][b] = prefixSum[i - 1][b];
                if ((A[i] & (1 << b)) != 0) {
                    prefixSum[i][b]++;
                }
            }
        }
        for (int q = 1; q <= Q; q++) {
            final int l = scanner.nextInt();
            final int r = scanner.nextInt();
            int xor = 0;
            for (int b = 0; b < 32; b++) {
                if ((prefixSum[N][b] - (prefixSum[r][b] - prefixSum[l - 1][b])) % 2 != 0) {
                    xor |= (1 << b);
                }
            }
            out.println(xor);
        }
        out.flush();
    }
}
```
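A note on why the key line in the C++ solution works (this explanation is mine, not part of the original post): XOR is its own inverse (x ^ x = 0), so presum[r+1] ^ presum[l] equals the XOR of a[l..r], and XORing that into the XOR of the entire array (presum.back()) cancels exactly the elements inside [l, r]. A minimal standalone sketch checking this identity against the sample input:

```cpp
#include <cassert>
#include <vector>

// l, r are 1-based and inclusive, matching the problem statement.
int xor_outside_range(const std::vector<int>& a, int l, int r) {
    int total = 0, inside = 0;
    for (int i = 0; i < (int)a.size(); ++i) {
        total ^= a[i];
        if (l - 1 <= i && i <= r - 1) inside ^= a[i];  // XOR of a[l..r]
    }
    return total ^ inside;  // x ^ x = 0 cancels the range [l, r]
}

int main() {
    std::vector<int> a{1, 2, 3, 4, 5};
    assert(xor_outside_range(a, 2, 3) == (1 ^ 4 ^ 5));  // sample query 1 -> 0
    assert(xor_outside_range(a, 3, 4) == (1 ^ 2 ^ 5));  // sample query 2 -> 6
    return 0;
}
```

The posted solution computes the same thing in O(1) per query by precomputing the prefix XORs once.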
src/HOL/Number_Theory/Cong.thy

(*  Title:      HOL/Number_Theory/Cong.thy
    Author:     Christophe Tabacznyj
    Author:     Lawrence C. Paulson
    Author:     Amine Chaieb
    Author:     Thomas M. Rasmussen
    Author:     Jeremy Avigad

Defines congruence (notation: [x = y] (mod z)) for natural numbers and
integers.

This file combines and revises a number of prior developments.

The original theories "GCD" and "Primes" were by Christophe Tabacznyj
and Lawrence C. Paulson, based on @{cite davenport92}. They introduced
gcd, lcm, and prime for the natural numbers.

The original theory "IntPrimes" was by Thomas M. Rasmussen, and
extended gcd, lcm, primes to the integers. Amine Chaieb provided
another extension of the notions to the integers, and added a number
of results to "Primes" and "GCD".

The original theory, "IntPrimes", by Thomas M. Rasmussen, defined and
developed the congruence relations on the integers. The notion was
extended to the natural numbers by Chaieb. Jeremy Avigad combined
these, revised and tidied them, made the development uniform for the
natural numbers and the integers, and added a number of new theorems.
*)

section \<open>Congruence\<close>

theory Cong
  imports "HOL-Computational_Algebra.Primes"
begin

subsection \<open>Turn off \<open>One_nat_def\<close>\<close>

lemma power_eq_one_eq_nat [simp]: "x^m = 1 \<longleftrightarrow> m = 0 \<or> x = 1"
  for x m :: nat
  by (induct m) auto

declare mod_pos_pos_trivial [simp]


subsection \<open>Main definitions\<close>

class cong =
  fixes cong :: "'a \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"  ("(1[_ = _] '(()mod _'))")
begin

abbreviation notcong :: "'a \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"  ("(1[_ \<noteq> _] '(()mod _'))")
  where "notcong x y m \<equiv> \<not> cong x y m"

end


subsubsection \<open>Definitions for the natural numbers\<close>

instantiation nat :: cong
begin

definition cong_nat :: "nat \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> bool"
  where "cong_nat x y m \<longleftrightarrow> x mod m = y mod m"

instance ..

end


subsubsection \<open>Definitions for the integers\<close>

instantiation int :: cong
begin

definition cong_int :: "int \<Rightarrow> int \<Rightarrow> int \<Rightarrow> bool"
  where "cong_int x y m \<longleftrightarrow> x mod m = y mod m"

instance ..

end


subsection \<open>Set up Transfer\<close>


lemma transfer_nat_int_cong:
  "x \<ge> 0 \<Longrightarrow> y \<ge> 0 \<Longrightarrow> m \<ge> 0 \<Longrightarrow> [nat x = nat y] (mod (nat m)) \<longleftrightarrow> [x = y] (mod m)"
  for x y m :: int
  unfolding cong_int_def cong_nat_def
  by (metis int_nat_eq nat_mod_distrib zmod_int)

declare transfer_morphism_nat_int [transfer add return: transfer_nat_int_cong]

lemma transfer_int_nat_cong: "[int x = int y] (mod (int m)) = [x = y] (mod m)"
  by (auto simp add: cong_int_def cong_nat_def) (auto simp add: zmod_int [symmetric])

declare transfer_morphism_int_nat [transfer add return: transfer_int_nat_cong]


subsection \<open>Congruence\<close>

(* was zcong_0, etc. *)
lemma cong_0_nat [simp, presburger]: "[a = b] (mod 0) \<longleftrightarrow> a = b"
  for a b :: nat
  by (auto simp: cong_nat_def)

lemma cong_0_int [simp, presburger]: "[a = b] (mod 0) \<longleftrightarrow> a = b"
  for a b :: int
  by (auto simp: cong_int_def)

lemma cong_1_nat [simp, presburger]: "[a = b] (mod 1)"
  for a b :: nat
  by (auto simp: cong_nat_def)

lemma cong_Suc_0_nat [simp, presburger]: "[a = b] (mod Suc 0)"
  for a b :: nat
  by (auto simp: cong_nat_def)

lemma cong_1_int [simp, presburger]: "[a = b] (mod 1)"
  for a b :: int
  by (auto simp: cong_int_def)

lemma cong_refl_nat [simp]: "[k = k] (mod m)"
  for k :: nat
  by (auto simp: cong_nat_def)

lemma cong_refl_int [simp]: "[k = k] (mod m)"
  for k :: int
  by (auto simp: cong_int_def)

lemma cong_sym_nat: "[a = b] (mod m) \<Longrightarrow> [b = a] (mod m)"
  for a b :: nat
  by (auto simp: cong_nat_def)

lemma cong_sym_int: "[a = b] (mod m) \<Longrightarrow> [b = a] (mod m)"
  for a b :: int
  by (auto simp: cong_int_def)

lemma cong_sym_eq_nat: "[a = b] (mod m) = [b = a] (mod m)"
  for a b :: nat
  by (auto simp: cong_nat_def)

lemma cong_sym_eq_int: "[a = b] (mod m) = [b = a] (mod m)"
  for a b :: int
  by (auto simp: cong_int_def)

lemma cong_trans_nat [trans]: "[a = b] (mod m) \<Longrightarrow> [b = c] (mod m) \<Longrightarrow> [a = c] (mod m)"
  for a b c :: nat
  by (auto simp: cong_nat_def)

lemma cong_trans_int [trans]: "[a = b] (mod m) \<Longrightarrow> [b = c] (mod m) \<Longrightarrow> [a = c] (mod m)"
  for a b c :: int
  by (auto simp: cong_int_def)

lemma cong_add_nat: "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow> [a + c = b + d] (mod m)"
  for a b c :: nat
  unfolding cong_nat_def by (metis mod_add_cong)

lemma cong_add_int: "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow> [a + c = b + d] (mod m)"
  for a b c :: int
  unfolding cong_int_def by (metis mod_add_cong)

lemma cong_diff_int: "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow> [a - c = b - d] (mod m)"
  for a b c :: int
  unfolding cong_int_def by (metis mod_diff_cong)

lemma cong_diff_aux_int:
  "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow>
    a \<ge> c \<Longrightarrow> b \<ge> d \<Longrightarrow> [tsub a c = tsub b d] (mod m)"
  for a b c d :: int
  by (metis cong_diff_int tsub_eq)

lemma cong_diff_nat:
  fixes a b c d :: nat
  assumes "[a = b] (mod m)" "[c = d] (mod m)" "a \<ge> c" "b \<ge> d"
  shows "[a - c = b - d] (mod m)"
  using assms by (rule cong_diff_aux_int [transferred])

lemma cong_mult_nat: "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow> [a * c = b * d] (mod m)"
  for a b c d :: nat
  unfolding cong_nat_def by (metis mod_mult_cong)

lemma cong_mult_int: "[a = b] (mod m) \<Longrightarrow> [c = d] (mod m) \<Longrightarrow> [a * c = b * d] (mod m)"
  for a b c d :: int
  unfolding cong_int_def by (metis mod_mult_cong)

lemma cong_exp_nat: "[x = y] (mod n) \<Longrightarrow> [x^k = y^k] (mod n)"
  for x y :: nat
  by (induct k) (auto simp: cong_mult_nat)

lemma cong_exp_int: "[x = y] (mod n) \<Longrightarrow> [x^k = y^k] (mod n)"
  for x y :: int
  by (induct k) (auto simp: cong_mult_int)

lemma cong_sum_nat: "(\<And>x. x \<in> A \<Longrightarrow> [f x = g x] (mod m)) \<Longrightarrow> [(\<Sum>x\<in>A. f x) = (\<Sum>x\<in>A. g x)] (mod m)"
  for f g :: "'a \<Rightarrow> nat"
  by (induct A rule: infinite_finite_induct) (auto intro: cong_add_nat)

lemma cong_sum_int: "(\<And>x. x \<in> A \<Longrightarrow> [f x = g x] (mod m)) \<Longrightarrow> [(\<Sum>x\<in>A. f x) = (\<Sum>x\<in>A. g x)] (mod m)"
  for f g :: "'a \<Rightarrow> int"
  by (induct A rule: infinite_finite_induct) (auto intro: cong_add_int)

lemma cong_prod_nat: "(\<And>x. x \<in> A \<Longrightarrow> [f x = g x] (mod m)) \<Longrightarrow> [(\<Prod>x\<in>A. f x) = (\<Prod>x\<in>A. g x)] (mod m)"
  for f g :: "'a \<Rightarrow> nat"
  by (induct A rule: infinite_finite_induct) (auto intro: cong_mult_nat)

lemma cong_prod_int: "(\<And>x. x \<in> A \<Longrightarrow> [f x = g x] (mod m)) \<Longrightarrow> [(\<Prod>x\<in>A. f x) = (\<Prod>x\<in>A. g x)] (mod m)"
  for f g :: "'a \<Rightarrow> int"
  by (induct A rule: infinite_finite_induct) (auto intro: cong_mult_int)

lemma cong_scalar_nat: "[a = b] (mod m) \<Longrightarrow> [a * k = b * k] (mod m)"
  for a b k :: nat
  by (rule cong_mult_nat) simp_all

lemma cong_scalar_int: "[a = b] (mod m) \<Longrightarrow> [a * k = b * k] (mod m)"
  for a b k :: int
  by (rule cong_mult_int) simp_all

lemma cong_scalar2_nat: "[a = b] (mod m) \<Longrightarrow> [k * a = k * b] (mod m)"
  for a b k :: nat
  by (rule cong_mult_nat) simp_all

lemma cong_scalar2_int: "[a = b] (mod m) \<Longrightarrow> [k * a = k * b] (mod m)"
  for a b k :: int
  by (rule cong_mult_int) simp_all

lemma cong_mult_self_nat: "[a * m = 0] (mod m)"
  for a m :: nat
  by (auto simp: cong_nat_def)

lemma cong_mult_self_int: "[a * m = 0] (mod m)"
  for a m :: int
  by (auto simp: cong_int_def)

lemma cong_eq_diff_cong_0_int: "[a = b] (mod m) = [a - b = 0] (mod m)"
  for a b :: int
  by (metis cong_add_int cong_diff_int cong_refl_int diff_add_cancel diff_self)

lemma cong_eq_diff_cong_0_aux_int: "a \<ge> b \<Longrightarrow> [a = b] (mod m) = [tsub a b = 0] (mod m)"
  for a b :: int
  by (subst tsub_eq, assumption, rule cong_eq_diff_cong_0_int)

lemma cong_eq_diff_cong_0_nat:
  fixes a b :: nat
  assumes "a \<ge> b"
  shows "[a = b] (mod m) = [a - b = 0] (mod m)"
  using assms by (rule cong_eq_diff_cong_0_aux_int [transferred])

lemma cong_diff_cong_0'_nat:
  "[x = y] (mod n) \<longleftrightarrow> (if x \<le> y then [y - x = 0] (mod n) else [x - y = 0] (mod n))"
  for x y :: nat
  by (metis cong_eq_diff_cong_0_nat cong_sym_nat nat_le_linear)

lemma cong_altdef_nat: "a \<ge> b \<Longrightarrow> [a = b] (mod m) \<longleftrightarrow> m dvd (a - b)"
  for a b :: nat
  apply (subst cong_eq_diff_cong_0_nat, assumption)
  apply (unfold cong_nat_def)
  apply (simp add: dvd_eq_mod_eq_0 [symmetric])
  done

lemma cong_altdef_int: "[a = b] (mod m) \<longleftrightarrow> m dvd (a - b)"
  for a b :: int
  by (metis cong_int_def mod_eq_dvd_iff)

lemma cong_abs_int: "[x = y] (mod abs m) \<longleftrightarrow> [x = y] (mod m)"
  for x y :: int
  by (simp add: cong_altdef_int)

lemma cong_square_int:
  "prime p \<Longrightarrow> 0 < a \<Longrightarrow> [a * a = 1] (mod p) \<Longrightarrow> [a = 1] (mod p) \<or> [a = - 1] (mod p)"
  for a :: int
  apply (simp only: cong_altdef_int)
  apply (subst prime_dvd_mult_eq_int [symmetric], assumption)
  apply (auto simp add: field_simps)
  done

lemma cong_mult_rcancel_int: "coprime k m \<Longrightarrow> [a * k = b * k] (mod m) = [a = b] (mod m)"
  for a k m :: int
  by (metis cong_altdef_int left_diff_distrib coprime_dvd_mult_iff gcd.commute)

lemma cong_mult_rcancel_nat: "coprime k m \<Longrightarrow> [a * k = b * k] (mod m) = [a = b] (mod m)"
  for a k m :: nat
  by (metis cong_mult_rcancel_int [transferred])

lemma cong_mult_lcancel_nat: "coprime k m \<Longrightarrow> [k * a = k * b ] (mod m) = [a = b] (mod m)"
  for a k m :: nat
  by (simp add: mult.commute cong_mult_rcancel_nat)

lemma cong_mult_lcancel_int: "coprime k m \<Longrightarrow> [k * a = k * b] (mod m) = [a = b] (mod m)"
  for a k m :: int
  by (simp add: mult.commute cong_mult_rcancel_int)

(* was zcong_zgcd_zmult_zmod *)
lemma coprime_cong_mult_int:
  "[a = b] (mod m) \<Longrightarrow> [a = b] (mod n) \<Longrightarrow> coprime m n \<Longrightarrow> [a = b] (mod m * n)"
  for a b :: int
  by (metis divides_mult cong_altdef_int)

lemma coprime_cong_mult_nat:
  "[a = b] (mod m) \<Longrightarrow> [a = b] (mod n) \<Longrightarrow> coprime m n \<Longrightarrow> [a = b] (mod m * n)"
  for a b :: nat
  by (metis coprime_cong_mult_int [transferred])

lemma cong_less_imp_eq_nat: "0 \<le> a \<Longrightarrow> a < m \<Longrightarrow> 0 \<le> b \<Longrightarrow> b < m \<Longrightarrow> [a = b] (mod m) \<Longrightarrow> a = b"
  for a b :: nat
  by (auto simp add: cong_nat_def)

lemma cong_less_imp_eq_int: "0 \<le> a \<Longrightarrow> a < m \<Longrightarrow> 0 \<le> b \<Longrightarrow> b < m \<Longrightarrow> [a = b] (mod m) \<Longrightarrow> a = b"
  for a b :: int
  by (auto simp add: cong_int_def)

lemma cong_less_unique_nat: "0 < m \<Longrightarrow> (\<exists>!b. 0 \<le> b \<and> b < m \<and> [a = b] (mod m))"
  for a m :: nat
  by (auto simp: cong_nat_def) (metis mod_less_divisor mod_mod_trivial)

lemma cong_less_unique_int: "0 < m \<Longrightarrow> (\<exists>!b. 0 \<le> b \<and> b < m \<and> [a = b] (mod m))"
  for a m :: int
  by (auto simp: cong_int_def) (metis mod_mod_trivial pos_mod_conj)

lemma cong_iff_lin_int: "[a = b] (mod m) \<longleftrightarrow> (\<exists>k. b = a + m * k)"
  for a b :: int
  apply (auto simp add: cong_altdef_int dvd_def)
  apply (rule_tac [!] x = "-k" in exI, auto)
  done

lemma cong_iff_lin_nat: "([a = b] (mod m)) \<longleftrightarrow> (\<exists>k1 k2. b + k1 * m = a + k2 * m)"
  (is "?lhs = ?rhs")
  for a b :: nat
proof
  assume ?lhs
  show ?rhs
  proof (cases "b \<le> a")
    case True
    with \<open>?lhs\<close> show ?rhs
      by (metis cong_altdef_nat dvd_def le_add_diff_inverse add_0_right mult_0 mult.commute)
  next
    case False
    with \<open>?lhs\<close> show ?rhs
      apply (subst (asm) cong_sym_eq_nat)
      apply (auto simp: cong_altdef_nat)
      apply (metis add_0_right add_diff_inverse dvd_div_mult_self less_or_eq_imp_le mult_0)
      done
  qed
next
  assume ?rhs
  then show ?lhs
    by (metis cong_nat_def mod_mult_self2 mult.commute)
qed

lemma cong_gcd_eq_int: "[a = b] (mod m) \<Longrightarrow> gcd a m = gcd b m"
  for a b :: int
  by (metis cong_int_def gcd_red_int)

lemma cong_gcd_eq_nat: "[a = b] (mod m) \<Longrightarrow> gcd a m = gcd b m"
  for a b :: nat
  by (metis cong_gcd_eq_int [transferred])

lemma cong_imp_coprime_nat: "[a = b] (mod m) \<Longrightarrow> coprime a m \<Longrightarrow> coprime b m"
  for a b :: nat
  by (auto simp add: cong_gcd_eq_nat)

lemma cong_imp_coprime_int: "[a = b] (mod m) \<Longrightarrow> coprime a m \<Longrightarrow> coprime b m"
  for a b :: int
  by (auto simp add: cong_gcd_eq_int)

lemma cong_cong_mod_nat: "[a = b] (mod m) \<longleftrightarrow> [a mod m = b mod m] (mod m)"
  for a b :: nat
  by (auto simp add: cong_nat_def)

lemma cong_cong_mod_int: "[a = b] (mod m) \<longleftrightarrow> [a mod m = b mod m] (mod m)"
  for a b :: int
  by (auto simp add: cong_int_def)

lemma cong_minus_int [iff]: "[a = b] (mod - m) \<longleftrightarrow> [a = b] (mod m)"
  for a b :: int
  by (metis cong_iff_lin_int minus_equation_iff mult_minus_left mult_minus_right)

(*
lemma mod_dvd_mod_int:
    "0 < (m::int) \<Longrightarrow> m dvd b \<Longrightarrow> (a mod b mod m) = (a mod m)"
  apply (unfold dvd_def, auto)
  apply (rule mod_mod_cancel)
  apply auto
  done

lemma mod_dvd_mod:
  assumes "0 < (m::nat)" and "m dvd b"
  shows "(a mod b mod m) = (a mod m)"

  apply (rule mod_dvd_mod_int [transferred])
  using assms apply auto
  done
*)

lemma cong_add_lcancel_nat: "[a + x = a + y] (mod n) \<longleftrightarrow> [x = y] (mod n)"
  for a x y :: nat
  by (simp add: cong_iff_lin_nat)

lemma cong_add_lcancel_int: "[a + x = a + y] (mod n) \<longleftrightarrow> [x = y] (mod n)"
  for a x y :: int
  by (simp add: cong_iff_lin_int)

lemma cong_add_rcancel_nat: "[x + a = y + a] (mod n) \<longleftrightarrow> [x = y] (mod n)"
  for a x y :: nat
  by (simp add: cong_iff_lin_nat)

lemma cong_add_rcancel_int: "[x + a = y + a] (mod n) \<longleftrightarrow> [x = y] (mod n)"
  for a x y :: int
  by (simp add: cong_iff_lin_int)

lemma cong_add_lcancel_0_nat: "[a + x = a] (mod n) \<longleftrightarrow> [x = 0] (mod n)"
  for a x :: nat
  by (simp add: cong_iff_lin_nat)

lemma cong_add_lcancel_0_int: "[a + x = a] (mod n) \<longleftrightarrow> [x = 0] (mod n)"
  for a x :: int
  by (simp add: cong_iff_lin_int)

lemma cong_add_rcancel_0_nat: "[x + a = a] (mod n) \<longleftrightarrow> [x = 0] (mod n)"
  for a x :: nat
  by (simp add: cong_iff_lin_nat)

lemma cong_add_rcancel_0_int: "[x + a = a] (mod n) \<longleftrightarrow> [x = 0] (mod n)"
  for a x :: int
  by (simp add: cong_iff_lin_int)

lemma cong_dvd_modulus_nat: "[x = y] (mod m) \<Longrightarrow> n dvd m \<Longrightarrow> [x = y] (mod n)"
  for x y :: nat
  apply (auto simp add: cong_iff_lin_nat dvd_def)
  apply (rule_tac x= "k1 * k" in exI)
  apply (rule_tac x= "k2 * k" in exI)
  apply (simp add: field_simps)
  done

lemma cong_dvd_modulus_int: "[x = y] (mod m) \<Longrightarrow> n dvd m \<Longrightarrow> [x = y] (mod n)"
  for x y :: int
  by (auto simp add: cong_altdef_int dvd_def)

lemma cong_dvd_eq_nat: "[x = y] (mod n) \<Longrightarrow> n dvd x \<longleftrightarrow> n dvd y"
  for x y :: nat
  by (auto simp: cong_nat_def dvd_eq_mod_eq_0)

lemma cong_dvd_eq_int: "[x = y] (mod n) \<Longrightarrow> n dvd x \<longleftrightarrow> n dvd y"
  for x y :: int
  by (auto simp: cong_int_def dvd_eq_mod_eq_0)

lemma cong_mod_nat: "n \<noteq> 0 \<Longrightarrow> [a mod n = a] (mod n)"
  for a n :: nat
  by (simp add: cong_nat_def)

lemma cong_mod_int: "n \<noteq> 0 \<Longrightarrow> [a mod n = a] (mod n)"
  for a n :: int
  by (simp add: cong_int_def)

lemma mod_mult_cong_nat: "a \<noteq> 0 \<Longrightarrow> b \<noteq> 0 \<Longrightarrow> [x mod (a * b) = y] (mod a) \<longleftrightarrow> [x = y] (mod a)"
  for a b :: nat
  by (simp add: cong_nat_def mod_mult2_eq mod_add_left_eq)

lemma neg_cong_int: "[a = b] (mod m) \<longleftrightarrow> [- a = - b] (mod m)"
  for a b :: int
  by (metis cong_int_def minus_minus mod_minus_cong)

lemma cong_modulus_neg_int: "[a = b] (mod m) \<longleftrightarrow> [a = b] (mod - m)"
  for a b :: int
  by (auto simp add: cong_altdef_int)

lemma mod_mult_cong_int: "a \<noteq> 0 \<Longrightarrow> b \<noteq> 0 \<Longrightarrow> [x mod (a * b) = y] (mod a) \<longleftrightarrow> [x = y] (mod a)"
  for a b :: int
proof (cases "b > 0")
  case True
  then show ?thesis
    by (simp add: cong_int_def mod_mod_cancel mod_add_left_eq)
next
  case False
  then show ?thesis
    apply (subst (1 2) cong_modulus_neg_int)
    apply (unfold cong_int_def)
    apply (subgoal_tac "a * b = (- a * - b)")
     apply (erule ssubst)
     apply (subst zmod_zmult2_eq)
      apply (auto simp add: mod_add_left_eq mod_minus_right div_minus_right)
    apply (metis mod_diff_left_eq mod_diff_right_eq mod_mult_self1_is_0 diff_zero)+
    done
qed

lemma cong_to_1_nat:
  fixes a :: nat
  assumes "[a = 1] (mod n)"
  shows "n dvd (a - 1)"
proof (cases "a = 0")
  case True
  then show ?thesis by force
next
  case False
  with assms show ?thesis by (metis cong_altdef_nat leI less_one)
qed

lemma cong_0_1_nat': "[0 = Suc 0] (mod n) \<longleftrightarrow> n = Suc 0"
  by (auto simp: cong_nat_def)

lemma cong_0_1_nat: "[0 = 1] (mod n) \<longleftrightarrow> n = 1"
  for n :: nat
  by (auto simp: cong_nat_def)

lemma cong_0_1_int: "[0 = 1] (mod n) \<longleftrightarrow> n = 1 \<or> n = - 1"
  for n :: int
  by (auto simp: cong_int_def zmult_eq_1_iff)

lemma cong_to_1'_nat: "[a = 1] (mod n) \<longleftrightarrow> a = 0 \<and> n = 1 \<or> (\<exists>m. a = 1 + m * n)"
  for a :: nat
  by (metis add.right_neutral cong_0_1_nat cong_iff_lin_nat cong_to_1_nat
      dvd_div_mult_self leI le_add_diff_inverse less_one mult_eq_if)

lemma cong_le_nat: "y \<le> x \<Longrightarrow> [x = y] (mod n) \<longleftrightarrow> (\<exists>q. x = q * n + y)"
  for x y :: nat
  by (metis cong_altdef_nat Nat.le_imp_diff_is_add dvd_def mult.commute)

lemma cong_solve_nat:
  fixes a :: nat
  assumes "a \<noteq> 0"
  shows "\<exists>x. [a * x = gcd a n] (mod n)"
proof (cases "n = 0")
  case True
  then show ?thesis by force
next
  case False
  then show ?thesis
    using bezout_nat [of a n, OF \<open>a \<noteq> 0\<close>]
    by auto (metis cong_add_rcancel_0_nat cong_mult_self_nat mult.commute)
qed

lemma cong_solve_int: "a \<noteq> 0 \<Longrightarrow> \<exists>x. [a * x = gcd a n] (mod n)"
  for a :: int
  apply (cases "n = 0")
   apply (cases "a \<ge> 0")
    apply auto
   apply (rule_tac x = "-1" in exI)
   apply auto
  apply (insert bezout_int [of a n], auto)
  apply (metis cong_iff_lin_int mult.commute)
  done

lemma cong_solve_dvd_nat:
  fixes a :: nat
  assumes a: "a \<noteq> 0" and b: "gcd a n dvd d"
  shows "\<exists>x. [a * x = d] (mod n)"
proof -
  from cong_solve_nat [OF a] obtain x where "[a * x = gcd a n](mod n)"
    by auto
  then have "[(d div gcd a n) * (a * x) = (d div gcd a n) * gcd a n] (mod n)"
    by (elim cong_scalar2_nat)
  also from b have "(d div gcd a n) * gcd a n = d"
    by (rule dvd_div_mult_self)
  also have "(d div gcd a n) * (a * x) = a * (d div gcd a n * x)"
    by auto
  finally show ?thesis
    by auto
qed

lemma cong_solve_dvd_int:
  assumes a: "(a::int) \<noteq> 0" and b: "gcd a n dvd d"
  shows "\<exists>x. [a * x = d] (mod n)"
proof -
  from cong_solve_int [OF a] obtain x where "[a * x = gcd a n](mod n)"
    by auto
  then have "[(d div gcd a n) * (a * x) = (d div gcd a n) * gcd a n] (mod n)"
    by (elim cong_scalar2_int)
  also from b have "(d div gcd a n) * gcd a n = d"
    by (rule dvd_div_mult_self)
  also have "(d div gcd a n) * (a * x) = a * (d div gcd a n * x)"
    by auto
  finally show ?thesis
    by auto
qed

lemma cong_solve_coprime_nat:
  fixes a :: nat
  assumes "coprime a n"
  shows "\<exists>x. [a * x = 1] (mod n)"
proof (cases "a = 0")
  case True
  with assms show ?thesis by force
next
  case False
  with assms show ?thesis by (metis cong_solve_nat)
qed

lemma cong_solve_coprime_int: "coprime (a::int) n \<Longrightarrow> \<exists>x. [a * x = 1] (mod n)"
  apply (cases "a = 0")
   apply auto
   apply (cases "n \<ge> 0")
    apply auto
  apply (metis cong_solve_int)
  done

lemma coprime_iff_invertible_nat:
  "m > 0 \<Longrightarrow> coprime a m = (\<exists>x. [a * x = Suc 0] (mod m))"
  by (metis One_nat_def cong_gcd_eq_nat cong_solve_coprime_nat coprime_lmult gcd.commute gcd_Suc_0)

lemma coprime_iff_invertible_int: "m > 0 \<Longrightarrow> coprime a m \<longleftrightarrow> (\<exists>x. [a * x = 1] (mod m))"
  for m :: int
  apply (auto intro: cong_solve_coprime_int)
  apply (metis cong_int_def coprime_mul_eq gcd_1_int gcd.commute gcd_red_int)
  done

lemma coprime_iff_invertible'_nat:
  "m > 0 \<Longrightarrow> coprime a m \<longleftrightarrow> (\<exists>x. 0 \<le> x \<and> x < m \<and> [a * x = Suc 0] (mod m))"
  apply (subst coprime_iff_invertible_nat)
   apply auto
  apply (auto simp add: cong_nat_def)
  apply (metis mod_less_divisor mod_mult_right_eq)
  done

lemma coprime_iff_invertible'_int:
  "m > 0 \<Longrightarrow> coprime a m \<longleftrightarrow> (\<exists>x. 0 \<le> x \<and> x < m \<and> [a * x = 1] (mod m))"
  for m :: int
  apply (subst coprime_iff_invertible_int)
  apply (auto simp add: cong_int_def)
  apply (metis mod_mult_right_eq pos_mod_conj)
  done

lemma cong_cong_lcm_nat: "[x = y] (mod a) \<Longrightarrow> [x = y] (mod b) \<Longrightarrow> [x = y] (mod lcm a b)"
  for x y :: nat
  apply (cases "y \<le> x")
   apply (metis cong_altdef_nat lcm_least)
  apply (meson cong_altdef_nat cong_sym_nat lcm_least_iff nat_le_linear)
  done

lemma cong_cong_lcm_int: "[x = y] (mod a) \<Longrightarrow> [x = y] (mod b) \<Longrightarrow> [x = y] (mod lcm a b)"
  for x y :: int
  by (auto simp add: cong_altdef_int lcm_least)

lemma cong_cong_prod_coprime_nat [rule_format]: "finite A \<Longrightarrow>
    (\<forall>i\<in>A. (\<forall>j\<in>A. i \<noteq> j \<longrightarrow> coprime (m i) (m j))) \<longrightarrow>
    (\<forall>i\<in>A. [(x::nat) = y] (mod m i)) \<longrightarrow>
    [x = y] (mod (\<Prod>i\<in>A. m i))"
  apply (induct set: finite)
   apply auto
  apply (metis One_nat_def coprime_cong_mult_nat gcd.commute prod_coprime)
  done

lemma cong_cong_prod_coprime_int [rule_format]: "finite A \<Longrightarrow>
    (\<forall>i\<in>A. (\<forall>j\<in>A. i \<noteq> j \<longrightarrow> coprime (m i) (m j))) \<longrightarrow>
    (\<forall>i\<in>A. [(x::int) = y] (mod m i)) \<longrightarrow>
    [x = y] (mod (\<Prod>i\<in>A. m i))"
  apply (induct set: finite)
   apply auto
  apply (metis coprime_cong_mult_int gcd.commute prod_coprime)
  done

lemma binary_chinese_remainder_aux_nat:
  fixes m1 m2 :: nat
  assumes a: "coprime m1 m2"
  shows "\<exists>b1 b2. [b1 = 1] (mod m1) \<and> [b1 = 0] (mod m2) \<and> [b2 = 0] (mod m1) \<and> [b2 = 1] (mod m2)"
proof -
  from cong_solve_coprime_nat [OF a] obtain x1 where 1: "[m1 * x1 = 1] (mod m2)"
    by auto
  from a have b: "coprime m2 m1"
    by (subst gcd.commute)
  from cong_solve_coprime_nat [OF b] obtain x2 where 2: "[m2 * x2 = 1] (mod m1)"
    by auto
  have "[m1 * x1 = 0] (mod m1)"
    by (subst mult.commute) (rule cong_mult_self_nat)
  moreover have "[m2 * x2 = 0] (mod m2)"
    by (subst mult.commute) (rule cong_mult_self_nat)
  ultimately show ?thesis
    using 1 2 by blast
qed

lemma binary_chinese_remainder_aux_int:
  fixes m1 m2 :: int
  assumes a: "coprime m1 m2"
  shows "\<exists>b1 b2. [b1 = 1] (mod m1) \<and> [b1 = 0] (mod m2) \<and> [b2 = 0] (mod m1) \<and> [b2 = 1] (mod m2)"
proof -
  from cong_solve_coprime_int [OF a] obtain x1 where 1: "[m1 * x1 = 1] (mod m2)"
    by auto
  from a have b: "coprime m2 m1"
    by (subst gcd.commute)
  from cong_solve_coprime_int [OF b] obtain x2 where 2: "[m2 * x2 = 1] (mod m1)"
    by auto
  have "[m1 * x1 = 0] (mod m1)"
    by (subst mult.commute) (rule cong_mult_self_int)
  moreover have "[m2 * x2 = 0] (mod m2)"
    by (subst mult.commute) (rule cong_mult_self_int)
  ultimately show ?thesis
    using 1 2 by blast
qed

lemma binary_chinese_remainder_nat:
  fixes m1 m2 :: nat
  assumes a: "coprime m1 m2"
  shows "\<exists>x. [x = u1] (mod m1) \<and> [x = u2] (mod m2)"
proof -
  from binary_chinese_remainder_aux_nat [OF a] obtain b1 b2
    where "[b1 = 1] (mod m1)" and "[b1 = 0] (mod m2)"
      and "[b2 = 0] (mod m1)" and "[b2 = 1] (mod m2)"
    by blast
  let ?x = "u1 * b1 + u2 * b2"
  have "[?x = u1 * 1 + u2 * 0] (mod m1)"
    apply (rule cong_add_nat)
     apply (rule cong_scalar2_nat)
     apply (rule \<open>[b1 = 1] (mod m1)\<close>)
    apply (rule cong_scalar2_nat)
    apply (rule \<open>[b2 = 0] (mod m1)\<close>)
    done
  then have "[?x = u1] (mod m1)" by simp
  have "[?x = u1 * 0 + u2 * 1] (mod m2)"
    apply (rule cong_add_nat)
     apply (rule cong_scalar2_nat)
     apply (rule \<open>[b1 = 0] (mod m2)\<close>)
    apply (rule cong_scalar2_nat)
    apply (rule \<open>[b2 = 1] (mod m2)\<close>)
    done
  then have "[?x = u2] (mod m2)"
    by simp
  with \<open>[?x = u1] (mod m1)\<close> show ?thesis
    by blast
qed

lemma binary_chinese_remainder_int:
  fixes m1 m2 :: int
  assumes a: "coprime m1 m2"
  shows "\<exists>x. [x = u1] (mod m1) \<and> [x = u2] (mod m2)"
proof -
  from binary_chinese_remainder_aux_int [OF a] obtain b1 b2
    where "[b1 = 1] (mod m1)" and "[b1 = 0] (mod m2)"
      and "[b2 = 0] (mod m1)" and "[b2 = 1] (mod m2)"
    by blast
  let ?x = "u1 * b1 + u2 * b2"
  have "[?x = u1 * 1 + u2 * 0] (mod m1)"
    apply (rule cong_add_int)
     apply (rule cong_scalar2_int)
     apply (rule \<open>[b1 = 1] (mod m1)\<close>)
    apply (rule cong_scalar2_int)
    apply (rule \<open>[b2 = 0] (mod m1)\<close>)
    done
  then have "[?x = u1] (mod m1)" by simp
  have "[?x = u1 * 0 + u2 * 1] (mod m2)"
    apply (rule cong_add_int)
     apply (rule cong_scalar2_int)
     apply (rule \<open>[b1 = 0] (mod m2)\<close>)
    apply (rule cong_scalar2_int)
    apply (rule \<open>[b2 = 1] (mod m2)\<close>)
    done
  then have "[?x = u2] (mod m2)" by simp
  with \<open>[?x = u1] (mod m1)\<close> show ?thesis
    by blast
qed

lemma cong_modulus_mult_nat: "[x = y] (mod m * n) \<Longrightarrow> [x = y] (mod m)"
  for x y :: nat
  apply (cases "y \<le> x")
   apply (simp add: cong_altdef_nat)
   apply (erule dvd_mult_left)
  apply (rule cong_sym_nat)
  apply (subst (asm) cong_sym_eq_nat)
  apply (simp add: cong_altdef_nat)
  apply (erule dvd_mult_left)
  done

lemma cong_modulus_mult_int: "[x = y] (mod m * n) \<Longrightarrow> [x = y] (mod m)"
  for x y :: int
  apply (simp add: cong_altdef_int)
  apply (erule dvd_mult_left)
  done

lemma cong_less_modulus_unique_nat: "[x = y] (mod m) \<Longrightarrow> x < m \<Longrightarrow> y < m \<Longrightarrow> x = y"
  for x y :: nat
  by (simp add: cong_nat_def)

lemma binary_chinese_remainder_unique_nat:
  fixes m1 m2 :: nat
  assumes a: "coprime m1 m2"
    and nz: "m1 \<noteq> 0" "m2 \<noteq> 0"
  shows "\<exists>!x. x < m1 * m2 \<and> [x = u1] (mod m1) \<and> [x = u2] (mod m2)"
proof -
  from binary_chinese_remainder_nat [OF a] obtain y
    where "[y = u1] (mod m1)" and "[y = u2] (mod m2)"
    by blast
  let ?x = "y mod (m1 * m2)"
  from nz have less: "?x < m1 * m2"
    by auto
  have 1: "[?x = u1] (mod m1)"
    apply (rule cong_trans_nat)
     prefer 2
     apply (rule \<open>[y = u1] (mod m1)\<close>)
    apply (rule cong_modulus_mult_nat)
    apply (rule cong_mod_nat)
    using nz apply auto
    done
  have 2: "[?x = u2] (mod m2)"
    apply (rule cong_trans_nat)
     prefer 2
     apply (rule \<open>[y = u2] (mod m2)\<close>)
    apply (subst mult.commute)
    apply (rule cong_modulus_mult_nat)
    apply (rule cong_mod_nat)
    using nz apply auto
    done
  have "\<forall>z. z < m1 * m2 \<and> [z = u1] (mod m1) \<and> [z = u2] (mod m2) \<longrightarrow> z = ?x"
  proof clarify
    fix z
    assume "z < m1 * m2"
    assume "[z = u1] (mod m1)" and "[z = u2] (mod m2)"
    have "[?x = z] (mod m1)"
      apply (rule cong_trans_nat)
       apply (rule \<open>[?x = u1] (mod m1)\<close>)
      apply (rule cong_sym_nat)
      apply (rule \<open>[z = u1] (mod m1)\<close>)
      done
    moreover have "[?x = z] (mod m2)"
      apply (rule cong_trans_nat)
       apply (rule \<open>[?x = u2] (mod m2)\<close>)
      apply (rule cong_sym_nat)
      apply (rule \<open>[z = u2] (mod m2)\<close>)
      done
    ultimately have "[?x = z] (mod m1 * m2)"
      by (auto intro: coprime_cong_mult_nat a)
    with \<open>z < m1 * m2\<close> \<open>?x < m1 * m2\<close> show "z = ?x"
      apply (intro cong_less_modulus_unique_nat)
        apply (auto, erule cong_sym_nat)
      done
  qed
  with less 1 2 show ?thesis by auto
qed

lemma chinese_remainder_aux_nat:
  fixes A :: "'a set"
    and m :: "'a \<Rightarrow> nat"
  assumes fin: "finite A"
    and cop: "\<forall>i \<in> A. (\<forall>j \<in> A. i \<noteq> j \<longrightarrow> coprime (m i) (m j))"
  shows "\<exists>b. (\<forall>i \<in> A. [b i = 1] (mod m i) \<and> [b i = 0] (mod (\<Prod>j \<in> A - {i}. m j)))"
proof (rule finite_set_choice, rule fin, rule ballI)
  fix i
  assume "i \<in> A"
  with cop have "coprime (\<Prod>j \<in> A - {i}. m j) (m i)"
    by (intro prod_coprime) auto
  then have "\<exists>x. [(\<Prod>j \<in> A - {i}. m j) * x = 1] (mod m i)"
    by (elim cong_solve_coprime_nat)
  then obtain x where "[(\<Prod>j \<in> A - {i}. m j) * x = 1] (mod m i)"
    by auto
  moreover have "[(\<Prod>j \<in> A - {i}. m j) * x = 0] (mod (\<Prod>j \<in> A - {i}. m j))"
    by (subst mult.commute, rule cong_mult_self_nat)
  ultimately show "\<exists>a. [a = 1] (mod m i) \<and> [a = 0] (mod prod m (A - {i}))"
    by blast
qed

lemma chinese_remainder_nat:
  fixes A :: "'a set"
    and m :: "'a \<Rightarrow> nat"
    and u :: "'a \<Rightarrow> nat"
  assumes fin: "finite A"
    and cop: "\<forall>i \<in> A. \<forall>j \<in> A. i \<noteq> j \<longrightarrow> coprime (m i) (m j)"
  shows "\<exists>x. \<forall>i \<in> A. [x = u i] (mod m i)"
proof -
  from chinese_remainder_aux_nat [OF fin cop]
  obtain b where b: "\<forall>i \<in> A. [b i = 1] (mod m i) \<and> [b i = 0] (mod (\<Prod>j \<in> A - {i}. m j))"
    by blast
  let ?x = "\<Sum>i\<in>A. (u i) * (b i)"
  show ?thesis
  proof (rule exI, clarify)
    fix i
    assume a: "i \<in> A"
    show "[?x = u i] (mod m i)"
    proof -
      from fin a have "?x = (\<Sum>j \<in> {i}. u j * b j) + (\<Sum>j \<in> A - {i}. u j * b j)"
        by (subst sum.union_disjoint [symmetric]) (auto intro: sum.cong)
      then have "[?x = u i * b i + (\<Sum>j \<in> A - {i}. u j * b j)] (mod m i)"
        by auto
      also have "[u i * b i + (\<Sum>j \<in> A - {i}. u j * b j) =
                  u i * 1 + (\<Sum>j \<in> A - {i}. u j * 0)] (mod m i)"
        apply (rule cong_add_nat)
         apply (rule cong_scalar2_nat)
        using b a apply blast
        apply (rule cong_sum_nat)
        apply (rule cong_scalar2_nat)
        using b apply auto
        apply (rule cong_dvd_modulus_nat)
         apply (drule (1) bspec)
         apply (erule conjE)
         apply assumption
        apply rule
        using fin a apply auto
        done
      finally show ?thesis
        by simp
    qed
  qed
qed

lemma coprime_cong_prod_nat [rule_format]: "finite A \<Longrightarrow>
    (\<forall>i\<in>A. (\<forall>j\<in>A. i \<noteq> j \<longrightarrow> coprime (m i) (m j))) \<longrightarrow>
    (\<forall>i\<in>A. [(x::nat) = y] (mod m i)) \<longrightarrow>
    [x = y] (mod (\<Prod>i\<in>A. m i))"
  apply (induct set: finite)
   apply auto
  apply (metis One_nat_def coprime_cong_mult_nat gcd.commute prod_coprime)
  done

lemma chinese_remainder_unique_nat:
  fixes A :: "'a set"
    and m :: "'a \<Rightarrow> nat"
    and u :: "'a \<Rightarrow> nat"
  assumes fin: "finite A"
    and nz: "\<forall>i\<in>A. m i \<noteq> 0"
    and cop: "\<forall>i\<in>A. \<forall>j\<in>A. i \<noteq> j \<longrightarrow> coprime (m i) (m j)"
  shows "\<exists>!x. x < (\<Prod>i\<in>A. m i) \<and> (\<forall>i\<in>A. [x = u i] (mod m i))"
proof -
  from chinese_remainder_nat [OF fin cop]
  obtain y where one: "(\<forall>i\<in>A. [y = u i] (mod m i))"
    by blast
  let ?x = "y mod (\<Prod>i\<in>A. m i)"
  from fin nz have prodnz: "(\<Prod>i\<in>A. m i) \<noteq> 0"
    by auto
  then have less: "?x < (\<Prod>i\<in>A. m i)"
    by auto
  have cong: "\<forall>i\<in>A. [?x = u i] (mod m i)"
    apply auto
    apply (rule cong_trans_nat)
     prefer 2
    using one apply auto
    apply (rule cong_dvd_modulus_nat)
     apply (rule cong_mod_nat)
    using prodnz apply auto
    apply rule
     apply (rule fin)
    apply assumption
    done
  have unique: "\<forall>z. z < (\<Prod>i\<in>A. m i) \<and> (\<forall>i\<in>A. [z = u i] (mod m i)) \<longrightarrow> z = ?x"
  proof clarify
    fix z
    assume zless: "z < (\<Prod>i\<in>A. m i)"
    assume zcong: "(\<forall>i\<in>A. [z = u i] (mod m i))"
    have "\<forall>i\<in>A. [?x = z] (mod m i)"
      apply clarify
      apply (rule cong_trans_nat)
      using cong apply (erule bspec)
      apply (rule cong_sym_nat)
      using zcong apply auto
      done
    with fin cop have "[?x = z] (mod (\<Prod>i\<in>A. m i))"
      apply (intro coprime_cong_prod_nat)
        apply auto
      done
    with zless less show "z = ?x"
      apply (intro cong_less_modulus_unique_nat)
        apply auto
      apply (erule cong_sym_nat)
      done
  qed
  from less cong unique show ?thesis
    by blast
qed

end
Giles Taylor - 10 months ago

Javascript Question: Removing an object from an array using one value

Possibly a very obvious question from a beginner: if I have the following array...

```js
var arr = [
  {id: 1, item: "something", description: "something something"},
  {id: 2, item: "something else", description: "something different"},
  {id: 3, item: "something more", description: "more than something"}
]
```

... and wanted to delete a specific object within it by calling on the id (in this case by clicking on a div given the corresponding id)...

```js
var thisItem = $(this).attr("id");
```

... could I do this without using a for loop to match arr[i] and thisItem? And if so, how? I'm going to have a big array, so running a for loop seems very heavy-handed. Thanks!

Answer

You can use jQuery's grep:

```js
arr = jQuery.grep(arr, function(value) {
  return value.id != id;
});
```

(Here id is the value you read into thisItem; the loose != comparison matters, since the attribute is a string while the ids are numbers. Note that grep still iterates over the array internally — it just hides the loop.)
A C# interview question about instantiation order

I've been job hunting recently and interviewed at a few companies. One company's interview question left a deep impression on me — I'd seen a similar problem on cnblogs not long ago, but this one is more complex. Here it is:

```csharp
public class BaseA
{
    public static MyTest a1 = new MyTest("a1");
    public MyTest a2 = new MyTest("a2");

    static BaseA()
    {
        MyTest a3 = new MyTest("a3");
    }

    public BaseA()
    {
        MyTest a4 = new MyTest("a4");
    }

    public virtual void MyFun()
    {
        MyTest a5 = new MyTest("a5");
    }
}

public class BaseB : BaseA
{
    public static MyTest b1 = new MyTest("b1");
    public MyTest b2 = new MyTest("b2");

    static BaseB()
    {
        MyTest b3 = new MyTest("b3");
    }

    public BaseB()
    {
        MyTest b4 = new MyTest("b4");
    }

    public new void MyFun()
    {
        MyTest b5 = new MyTest("b5");
    }
}

static class Program
{
    static void Main()
    {
        BaseB baseb = new BaseB();
        baseb.MyFun();
    }
}

public class MyTest
{
    public MyTest(string info)
    {
        Console.WriteLine(info);
    }
}
```

The question: write down the order in which the instances a1–a5 and b1–b5 are created in Main(). (I added the MyTest class myself so the result is easy to observe; the original question just instantiated object.)

I don't know how many people could write the correct answer with full confidence — I certainly got it wrong. The correct answer is:

b1 b3 b2 a1 a3 a2 a4 b4 b5

(Only nine instances appear: a5 is never created, because MyFun is hidden with the new modifier and the call goes through a BaseB reference — more on new below.)

The concepts behind the question

Even though I got the question wrong, you improve by understanding why you got it wrong. Over the Dragon Boat Festival holiday, I worked through every concept this question touches. The key points:

1. Initializing fields inline.
2. When the type constructor (static constructor) runs.
3. The instantiation order of base and derived classes in C#.
4. What the new modifier does.

Initializing fields inline

This is covered in the book CLR via C#: so-called "inline" initialization is simply a shorthand syntax for initializing fields. Example:

```csharp
public class SomeType
{
    public int m_x = 5;
}
```

Assigning a value right where the field is declared is called inline initialization, and it is roughly equivalent to the following code:

```csharp
public class SomeType
{
    public int m_x;

    public SomeType()
    {
        m_x = 5;
    }
}
```

I say "roughly equivalent" because the execution order differs slightly: the compiler emits the inline-initialization code first and only then runs the constructor body. For example, in the code below m_x ends up as 10:

```csharp
public class SomeType
{
    // runs first
    public int m_x = 5;

    public SomeType()
    {
        // runs second
        m_x = 10;
    }
}
```

When the type constructor runs

The type constructor is just the static constructor we all know. Like the parameterless instance constructor, a parameterless static constructor is present by default in the classes we write. Before the first instance of a class is created, or before any static field of it is accessed, the JIT compiler arranges for the class's static constructor to be invoked. Static fields, of course, can also be initialized inline as described above.

From this you can see that the first time a class is instantiated, its static constructor is invoked first.

The instantiation order of base and derived classes in C#

This point is fairly simple: the base class's instance constructor is invoked before the derived class's instance constructor body runs. The question's output also shows that the base constructor runs later than the derived class's static constructor; my own knowledge doesn't reach deep enough to explain the low-level mechanics of this, so for now I'll just remember it.

What the new modifier does

I've seen quite a few problems featuring new used as a modifier in a method declaration. Its usage is documented on MSDN, where the official wording is that it "explicitly hides a member inherited from a base class".

My own understanding is simpler: when a method in a derived class has the same signature (parameters, method name, return type) as a method in the base class, adding the new modifier lets the derived class use that method unchanged. At bottom, the new modifier simply lets two unrelated methods with the same signature coexist. (By "same name" here, I mean the same method signature.)

Finally

Looking back, my day-to-day project work touches all of these points to some degree — I just never paid close attention or dug in to really master them. Writing this post is my way of pushing myself to take the fundamentals seriously and stop being impatient.

Happy Dragon Boat Festival, everyone! ^_^

posted @ 2011-06-05 23:03 许两会
metamail secure server.exe

Process name: Metamail Campaign Server
Application using this process: Metamail

What is metamail secure server.exe doing on my computer?

metamail secure server.exe is a Metamail Campaign Server belonging to Metamail, from Metamail Corp. Non-system processes like metamail secure server.exe originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries, which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.

Is metamail secure server.exe harmful?

metamail secure server.exe has not been assigned a security rating yet. You can scan your system to identify issues with this process and any services that can be safely removed.

Can I stop or remove metamail secure server.exe?

Most non-system processes that are running can be stopped, because they are not involved in running your operating system. metamail secure server.exe is used by 'Metamail', an application created by Metamail Corp. To stop metamail secure server.exe permanently, uninstall 'Metamail' from your system. Be aware that uninstalling applications can leave invalid registry entries that accumulate over time.

Is metamail secure server.exe CPU intensive?

This process is not considered CPU intensive. However, running too many processes on your system may affect your PC's performance. To reduce system overload, you can use the Microsoft System Configuration Utility (msconfig) to manually find and disable processes that launch upon start-up.

Why is metamail secure server.exe giving me errors?

Process-related issues are usually caused by problems encountered by the application that runs the process. A safe way to stop these errors is to uninstall the application and run a system scan to identify any remaining PC issues.
VHDL 12-bit BCD counter

Discussion in 'Hardware' started by dv09, Oct 7, 2009.

dv09 (joined Oct 7, 2009):

Hi, I have a task I am currently working on, written in VHDL. I am to design a three-digit, 12-bit incrementer and write the VHDL code. I am wondering whether this can be done by making a 12-bit input which then sends its three parts (11 downto 8, 7 downto 4, 3 downto 0) to their respective 7-segment displays, so that it can count from 0 to 999 and then reset. Does anyone have an illustration, or a link to an example of an increment circuit?
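The thread never received an answer, but the cascaded-digit logic dv09 describes is simple enough to sketch. Below is a plain C++ model of the increment rule — C++ rather than VHDL, purely to illustrate the per-digit carry behaviour; the function name and layout are made up for this sketch:

```cpp
#include <cstdio>

// Increment a 12-bit BCD value (three 4-bit digits), wrapping 999 -> 000.
// Mirrors the split dv09 describes: bits 11..8, 7..4, 3..0.
unsigned bcd_increment(unsigned bcd)
{
    unsigned digits[3] = { bcd & 0xF, (bcd >> 4) & 0xF, (bcd >> 8) & 0xF };
    for (int i = 0; i < 3; ++i) {
        if (digits[i] == 9) {
            digits[i] = 0;   // this digit rolls over; carry into the next one
        } else {
            ++digits[i];
            break;           // no carry, so higher digits are unchanged
        }
    }
    return digits[0] | (digits[1] << 4) | (digits[2] << 8);
}

int main()
{
    unsigned v = 0x998;          // BCD encoding of 998
    v = bcd_increment(v);        // 0x999
    v = bcd_increment(v);        // wraps to 0x000
    std::printf("%03X\n", v);    // prints 000
    return 0;
}
```

In VHDL the usual structure is the same idea in hardware: three 4-bit counters, where each higher digit's count-enable is the AND of all lower digits' "equals 9" comparisons, and each digit resets to 0 when it would pass 9.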
February 26, 2024

Exploring the Role of Certification in Online Gaming

1. The Importance of Certification in Online Gaming

Certification plays a crucial role in the world of online gaming. With the increasing popularity and growth of the online gaming industry, it becomes imperative to ensure the safety and security of players. Certification acts as a seal of approval and helps establish trust among players and operators.

When a gaming platform or software is certified, it means that it has undergone rigorous testing and met certain standards set by regulatory bodies. These standards often include fairness, security, and responsible gambling practices. Certification ensures that players are provided with a fair and enjoyable gaming experience without compromised integrity.

2. Ensuring Fairness and Integrity

One of the primary purposes of certification in online gaming is to ensure fairness. Fairness is achieved through the use of random number generators (RNGs) that determine the outcome of games. Certified gaming platforms and software undergo thorough testing to ensure that their RNGs are truly random and unbiased.

In addition to fairness, certification also promotes integrity in online gaming. Certified operators adhere to strict codes of conduct and ethical practices. They are required to provide transparent and accurate information about the odds of winning, payouts, and the overall house edge. This ensures that players are well informed and can make educated decisions while gambling online.

3. The Role of Regulatory Bodies

Regulatory bodies play a crucial role in the certification process of online gaming platforms. These organizations are responsible for establishing and enforcing regulations to protect players and maintain the integrity of online gaming. They conduct thorough audits and inspections to ensure that platforms and software meet their standards before certification.

The presence of regulatory bodies not only ensures the safety of players but also helps combat illegal and fraudulent activities in the online gaming industry. By requiring certification, these organizations create a barrier for unscrupulous operators, making it harder for them to deceive players or engage in unethical practices.

4. Building Trust and Confidence

Certification in online gaming builds trust and confidence among players. When players see that a gaming platform or software is certified, they feel more comfortable and secure in engaging with it. It assures them that their personal and financial information will be protected, and that the games they play are fair and unbiased.

This trust and confidence are vital for the continued growth and success of the online gaming industry. Players are more likely to return to certified platforms and recommend them to others, which leads to increased revenue and a positive reputation for the operators.

5. The Future of Certification in Online Gaming

As the online gaming industry continues to evolve, the role of certification will become even more important. With advancements in technology and the emergence of new gaming trends, regulatory bodies will need to keep up with the changing landscape and update their certification processes accordingly.

Additionally, the rise of cryptocurrencies and blockchain technology has the potential to revolutionize the certification process in online gaming. Blockchain-based certifications could provide greater transparency and immutability, ensuring that gaming platforms and software are truly fair and secure.

In conclusion, certification plays a vital role in ensuring the fairness, integrity, and security of online gaming. It builds trust among players and helps combat illegal and unethical activities. As the industry continues to grow, certification will continue to evolve to meet the changing demands and challenges of online gaming.
ASP.NET gives a developer a lot of loosely typed key-value collections in which to stash variables, depending upon persistence needs. The short list includes ViewState, Session, Application and HttpContext.Items. These collections can come in very handy when one needs to keep an object around outside of a single request or shuttle things between different bits of the http pipeline. But it comes at a price: these collections are loosely typed, just returning Objects. Moreover, there is no compile-time checking to ensure that you are requesting the right key in the right place. Errors can lead to crashes at best, and interesting data corruption at worst.

But those issues can easily be avoided with a little discipline up front. You see, rather than calling these variables directly, it is very possible to wrap them in an appropriately-scoped property and have that manage access to the underlying data. For the examples we are going to use the Session, but the semantics will remain the same with just about any of the above collections.

So, let's say we have a little web store. And this web store has a shopping cart. And we are going to stash this Shopping Cart object in the session. Now, one could find many instances of code like this:

    ShoppingCart sc = (ShoppingCart)Session["ShoppingCart"];
    if (sc == null)
    {
        sc = new ShoppingCart();
    }
    // do stuff with ShoppingCart.

Or, on a base page class somewhere, you could define something like:

    protected ShoppingCart Cart
    {
        get
        {
            if (Session["ShoppingCart"] == null)
            {
                Session["ShoppingCart"] = new ShoppingCart();
            }
            return (ShoppingCart)Session["ShoppingCart"];
        }
    }

The advantages of this approach are that the calling code never even needs to worry about where/how the shopping cart is stored, or if it can be null, or how to cast it out of the underlying store. Furthermore, if, at a future date, you decide that you are better off storing this cart in, say, the viewstate, the change is very, very easy because it only needs to be updated in one place.

Enjoy and don't be afraid to kick it on DotNetKicks.com.

• Anonymous
  off topic but,

  > Errors can lead to crashes at best,

  what do you mean by a crash though? how severe is it? the framework in question must be pretty fragile no; i mean a lack of encapsulation could cause so much damage? i'm not having a go (well, just a wee bit, but thats not the point) but maybe you could enlighten us lot who develop without this 'state of the art' framework from m$

  best regards, the doctor.

• wwb_99
  Crashes mean application layer errors, not system-wide halts. You know, the kind of errors one gets all the time in loosely typed frameworks if one is not ultra careful about passing the right objects around to the right methods.

• yacka
  What you're saying is essentially good, but you've weakened the message by addressing two different issues: the loose typing of session and the redundancy of code.

  As far as loose typing is concerned, I don't see how your second example really improves on the first. You're still assuming that 'ShoppingCart' will be the correct type. An exception will occur if for some reason an object of another type has been stored in the session using the same key. The code you've written is essentially the same, even if the latter has been separated to a base class and exposes the cart via a property. Perhaps a better example would be:

      protected ShoppingCart Cart
      {
          get
          {
              ShoppingCart myCart = Session["ShoppingCart"] as ShoppingCart;
              if (myCart == null)
              {
                  myCart = new ShoppingCart();
                  Session["ShoppingCart"] = myCart;
              }
              return myCart;
          }
      }

  This approach will always return a valid shopping cart without an exception, even if elsewhere 'ShoppingCart' has been used in the session for a differently typed object. It's a defensive approach that protects you against the unforeseen and the loose typing of session.

  However, none of this addresses what I think is your most important point: removing the redundancy of code. We could include the above code in each page where we need to access the cart, but that would create unnecessary maintenance overhead because we would need to update every instance of the code if anything changed. If you find you're repeating code, find some way of isolating that code so it can be reused directly in all required circumstances. As you suggest, moving the code to a base class would be one method of achieving that.

• Anonymous
  > Crashes mean application layer errors, not system-wide halts.

  well, thats not a crash then; its just your common garden error. a crash is something that brings the server down… nasty no? where do you .net developers get your terminology from? m$? god?

  the doctor

• wwb_99
  @yacka: good points. The slightly different property wrapping is very good, but I like mine because, if I (or some other dev) was dumb enough to use the wrong name, something would blow up somewhere, whereas your approach would very effectively silently eat errors.

  @anon: Yes, it is a garden variety application layer error. But to end users, the server catching on fire and "there was an issue accessing the page" are the same thing, and both are best avoided and/or caught during development.

• dhtmlgod (http://www.totalpda.co.uk)
  > However, none of this addresses what I think is your most important point, removing the redundancy of code. We could include the above code in each page where we need to access the cart, but that would create unnecessary maintenance overhead because we would need to update every instance of the code if anything changed.

  Not if you do what Wyatt suggested and have it in a base page class and inherit from that.

• yacka
  dhtmlgod, I know. I was re-emphasizing his point because it was confused with all the other stuff. The issues he describes are not addressed by moving code into a base class; it's merely a different way of writing the same thing.

• yacka
  Wyatt, that's a fair point, but I thought you were trying to guarantee you had the correct type.

• TheDeathart
  wow.. you wrote a singleton serializing to a session……
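A footnote on the article's ViewState remark — a minimal illustrative sketch (not from the original post) of the same property persisted in ViewState instead of Session; it assumes ShoppingCart is marked [Serializable] so it can survive in ViewState:

    // Same contract as before; only the backing store changed.
    protected ShoppingCart Cart
    {
        get
        {
            ShoppingCart cart = ViewState["ShoppingCart"] as ShoppingCart;
            if (cart == null)
            {
                cart = new ShoppingCart();
                ViewState["ShoppingCart"] = cart;
            }
            return cart;
        }
    }

Calling code stays identical, which is exactly the payoff the article describes: the storage swap happens in one place.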
Solved Logic Exercises

1. Simplify the following propositions using the laws of propositional algebra.

a) ∼[∼(p ∧ q) → ∼q] ∨ q

Solution:
∼[∼(p ∧ q) → ∼q] ∨ q ≡ ∼[∼(∼(p ∧ q)) ∨ ∼q] ∨ q   (by the law p → q ≡ ∼p ∨ q)
≡ ∼[(p ∧ q) ∨ ∼q] ∨ q                             (double negation: ∼(∼p) ≡ p)
≡ [∼(p ∧ q) ∧ ∼(∼q)] ∨ q                          (De Morgan: ∼(p ∨ q) ≡ ∼p ∧ ∼q)
≡ [∼(p ∧ q) ∧ q] ∨ q                              (double negation)
≡ [(∼p ∨ ∼q) ∧ q] ∨ q                             (De Morgan: ∼(p ∧ q) ≡ ∼p ∨ ∼q)
≡ q                                               (absorption: (a ∧ b) ∨ b ≡ b)

b) [(∼p ∧ q) → (r ∧ ∼r)] ∧ ∼q

Solution:
[(∼p ∧ q) → (r ∧ ∼r)] ∧ ∼q ≡ [(∼p ∧ q) → F] ∧ ∼q  (p ∧ ∼p ≡ F, a contradiction)
≡ [∼(∼p ∧ q) ∨ F] ∧ ∼q                            (p → q ≡ ∼p ∨ q)
≡ ∼(∼p ∧ q) ∧ ∼q                                  (p ∨ F ≡ p)
≡ (p ∨ ∼q) ∧ ∼q                                   (De Morgan, double negation)
≡ ∼q                                              (absorption: (a ∨ b) ∧ b ≡ b)

c) Show that the following proposition is a tautology: [(p ∨ ∼q) ∧ q] → p

Solution:
[(p ∨ ∼q) ∧ q] → p ≡ ∼[(p ∨ ∼q) ∧ q] ∨ p          (p → q ≡ ∼p ∨ q)
≡ [(∼p ∧ q) ∨ ∼q] ∨ p                             (De Morgan, double negation)
≡ (∼p ∧ q) ∨ (∼q ∨ p)                             (associativity of ∨)
≡ (∼p ∧ q) ∨ ∼(q ∧ ∼p)                            (De Morgan, commutativity of ∧)
≡ V                                               (p ∨ ∼p ≡ V)

d) Show that the following propositions are equivalent:
I)  p → (r ∨ ∼q)
II) (q → ∼p) ∨ (∼r → ∼p)

Solution:
I)  p → (r ∨ ∼q) ≡ ∼p ∨ (r ∨ ∼q)                  (p → q ≡ ∼p ∨ q)

Now, simplifying expression II:
II) (q → ∼p) ∨ (∼r → ∼p) ≡ (∼q ∨ ∼p) ∨ (∼(∼r) ∨ ∼p)
≡ (∼q ∨ ∼p) ∨ (r ∨ ∼p)                            (double negation)
≡ ∼p ∨ (r ∨ ∼q)                                   (commutativity and associativity)

The simplified expressions are identical; therefore I and II are equivalent.

2. From the falsity of (p → ∼q) ∨ (∼r → s), deduce the truth value of:
a) (∼p ∧ ∼q) ∨ ∼q
b) [(∼r ∨ q) ∧ q] ↔ [(∼q ∨ r) ∧ s]

Solution: That (p → ∼q) ∨ (∼r → s) is false means that (p → ∼q) is false and (∼r → s) is false. That (p → ∼q) is false implies that p is true and ∼q is false, so q is true. Likewise, that (∼r → s) is false implies that ∼r is true and s is false. In summary: p true, q true, r false, s false. With these values:

a) (∼p ∧ ∼q) ∨ ∼q ≡ (∼V ∧ ∼V) ∨ ∼V ≡ (F ∧ F) ∨ F ≡ F — false.
b) [(∼r ∨ q) ∧ q] ↔ [(∼q ∨ r) ∧ s] ≡ [(∼F ∨ V) ∧ V] ↔ [(∼V ∨ F) ∧ F] ≡ [(V ∨ V) ∧ V] ↔ [(F ∨ F) ∧ F] ≡ V ↔ F ≡ F — false.

3. Consider the propositions p: Cesar studies, q: Cesar works. Show that the scheme [(p ∨ q) ∧ (∼p → q)] → ∼q is equivalent to "Cesar does not work".

Solution: Simplifying the expression:
[(p ∨ q) ∧ (∼p → q)] → ∼q ≡ [(p ∨ q) ∧ (p ∨ q)] → ∼q ≡ (p ∨ q) → ∼q ≡ ∼(p ∨ q) ∨ ∼q ≡ ∼[(p ∨ q) ∧ q] ≡ ∼q,
where ∼q reads "Cesar does not work".

4. Knowing that (p ∧ q) and (q → t) are false, which of the following propositions are true?
a) ∼[p ∧ (∼q ∨ ∼p)]
b) [∼p ∨ (q ∧ ∼t)] ↔ {(p → q) ∧ ∼(q ∧ t)}

Solution: That (p ∧ q) is false leaves three cases, so it is convenient to analyze (q → t) first, which is false in only one case: q true and t false. Then, with q true, (p ∧ q) false reduces to a single case: p false. In summary: p false, q true, t false. With these values it is easy to verify that both a) and b) are true. (Verify it as an exercise.)

5. Determine the truth values of p, q and r, knowing that the proposition ∼(p → q) ∧ (r ∨ q) is true.

Solution: That ∼(p → q) ∧ (r ∨ q) is true means that ∼(p → q) is true and (r ∨ q) is true. That ∼(p → q) is true implies that (p → q) is false, so p is true and q is false. With q false and (r ∨ q) true, necessarily r is true.

6. Let p: the coffee is pleasant, q: the coffee has sugar. Which of the following propositions are equivalent?
I.   The coffee is pleasant unless sugar is added to it.
II.  If we add sugar, then the coffee is pleasant.
III. The coffee is pleasant if we do not add sugar.
IV.  The coffee has sugar, or the coffee is pleasant.

Solution: Symbolizing each sentence:
I.   ∼q → p ≡ q ∨ p
II.  q → p ≡ ∼q ∨ p
III. ∼q → p ≡ q ∨ p
IV.  q ∨ p

As can be seen, I, III and IV are equivalent.

7. Let H be the set of humans. Give the truth value of the following propositions:
(∀h ∈ H)(∃p ∈ H)(p is the father of h)
(∃p ∈ H)(∀h ∈ H)(p is the father of h)

Solution: Discuss it with your professor, taking H to be the set of humans.

8. Let A = {0, 1, 2, 3} be the domain of x and y. Give the truth value of the following propositions:
I.   ∀x, ∃y : (x² − y² < 10) ∨ (x² < y + 1)
II.  ∀x, ∀y : (x² − y² > −10) ∧ (x² > y + 1)
III. ∃x, ∃y : (x² > y²)

Solution:
I. The sentence has the connective ∨, which makes it true whenever at least one of the components is true, so we first analyze the component (x² − y² < 10) for each value of x ∈ {0, 1, 2, 3} and at least one value of y ∈ {0, 1, 2, 3}:
for x = 0 there exists y = 0 such that 0² − 0² < 10 — true;
for x = 1 there exists y = 0 such that 1² − 0² < 10 — true;
for x = 2 there exists y = 0 such that 2² − 0² < 10 — true;
for x = 3 there exists y = 0 such that 3² − 0² < 10 — true.
Since one component of the ∨ is true in every case, it is unnecessary to study the truth value of the other component, (x² < y + 1). Hence I is true.

II. The sentence has the connective ∧, which makes it false as soon as one component is false, and it must hold for EVERY x and EVERY y. The component (x² > y + 1) already fails for x = 0, y = 0, since 0² > 0 + 1 is false. Hence II is false.

III. We must find the truth value of x² > y² for at least one value of x and at least one value of y. For x = 3 and y = 0 we have 3² > 0² — true. Hence III is true.

9. Represent the following circuits by Boolean functions.

[The two circuit diagrams were lost in extraction; only the inputs p, q (first circuit) and p, q, r (second circuit) are recoverable.]

Solution (first circuit): p ∧ (q ∨ ∼p)
Solution (second circuit): (p ∧ q) ∨ {[(∼p) ∨ r] ∧ ∼q}

Solved logic exercises by Osvaldo Carvajal
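As a cross-check (an added illustration, not part of the original exercise set), the tautology in 1(c) can also be verified by brute force with a four-row truth table, here typeset in LaTeX with the document's V/F convention:

    \begin{tabular}{cc|c|c|c}
    $p$ & $q$ & $p \lor \sim q$ & $(p \lor \sim q) \land q$ & $[(p \lor \sim q) \land q] \to p$ \\ \hline
    V & V & V & V & V \\
    V & F & V & F & V \\
    F & V & F & F & V \\
    F & F & V & F & V \\
    \end{tabular}

The final column is V in every row, confirming the algebraic derivation.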
--- ../vdr-1.3.42-orig/dvbplayer.c	2006-01-08 12:39:41.000000000 +0100
+++ dvbplayer.c	2006-01-29 23:02:16.000000000 +0100
@@ -80,6 +80,8 @@ private:
   int length;
   bool hasData;
   cCondWait newSet;
+  cCondVar newDataCond;
+  cMutex newDataMutex;
 protected:
   void Action(void);
 public:
@@ -88,6 +90,7 @@ public:
   void Clear(void);
   int Read(cUnbufferedFile *File, uchar *Buffer, int Length);
   bool Reading(void) { return buffer; }
+  bool WaitForDataMs(int msToWait);
   };
 
 cNonBlockingFileReader::cNonBlockingFileReader(void)
@@ -150,8 +153,11 @@ void cNonBlockingFileReader::Action(void
          int r = f->Read(buffer + length, wanted - length);
          if (r >= 0) {
             length += r;
-            if (!r || length == wanted) // r == 0 means EOF
+            if (!r || length == wanted) { // r == 0 means EOF
+               cMutexLock NewDataLock(&newDataMutex);
                hasData = true;
+               newDataCond.Broadcast();
+               }
             }
          else if (r < 0 && FATALERRNO) {
             LOG_ERROR;
@@ -164,6 +170,14 @@ void cNonBlockingFileReader::Action(void
      }
 }
 
+bool cNonBlockingFileReader::WaitForDataMs(int msToWait)
+{
+  cMutexLock NewDataLock(&newDataMutex);
+  if (hasData)
+     return true;
+  return newDataCond.TimedWait(newDataMutex, msToWait);
+}
+
 // --- cDvbPlayer ------------------------------------------------------------
 
 #define PLAYERBUFSIZE  MEGABYTE(1)
@@ -349,6 +363,202 @@ void cDvbPlayer::Activate(bool On)
      Cancel(9);
 }
 
+// --- BEGIN fix for I frames -------------------------------------------
+//
+// Prior to the introduction of cVideoRepacker, VDR didn't start a new
+// PES packet when a new frame started. So, it was likely that the tail
+// of an I frame was at the beginning of the packet which started the
+// following B frame. Due to the organisation of VDR's index file, VDR
+// typically didn't read the tail of the I frame and therefore caused
+// softdevice plugins to not render such a frame as it was incomplete,
+// e. g. when moving cutting marks.
+//
+// The following code tries to fix incomplete I frames for recordings
+// made prior to the introduction of cVideoRepacker, to be able to
+// edit cutting marks for example with softdevice plugins like vdr-xine.
+//
+
+#if VDRVERSNUM < 10331
+
+enum ePesHeader {
+  phNeedMoreData = -1,
+  phInvalid = 0,
+  phMPEG1 = 1,
+  phMPEG2 = 2
+  };
+
+static ePesHeader AnalyzePesHeader(const uchar *Data, int Count, int &PesPayloadOffset, bool *ContinuationHeader = NULL)
+{
+  if (Count < 7)
+     return phNeedMoreData; // too short
+
+  if ((Data[6] & 0xC0) == 0x80) { // MPEG 2
+     if (Count < 9)
+        return phNeedMoreData; // too short
+
+     PesPayloadOffset = 6 + 3 + Data[8];
+     if (Count < PesPayloadOffset)
+        return phNeedMoreData; // too short
+
+     if (ContinuationHeader)
+        *ContinuationHeader = ((Data[6] == 0x80) && !Data[7] && !Data[8]);
+
+     return phMPEG2; // MPEG 2
+     }
+
+  // check for MPEG 1 ...
+  PesPayloadOffset = 6;
+
+  // skip up to 16 stuffing bytes
+  for (int i = 0; i < 16; i++) {
+      if (Data[PesPayloadOffset] != 0xFF)
+         break;
+
+      if (Count <= ++PesPayloadOffset)
+         return phNeedMoreData; // too short
+      }
+
+  // skip STD_buffer_scale/size
+  if ((Data[PesPayloadOffset] & 0xC0) == 0x40) {
+     PesPayloadOffset += 2;
+
+     if (Count <= PesPayloadOffset)
+        return phNeedMoreData; // too short
+     }
+
+  if (ContinuationHeader)
+     *ContinuationHeader = false;
+
+  if ((Data[PesPayloadOffset] & 0xF0) == 0x20) {
+     // skip PTS only
+     PesPayloadOffset += 5;
+     }
+  else if ((Data[PesPayloadOffset] & 0xF0) == 0x30) {
+     // skip PTS and DTS
+     PesPayloadOffset += 10;
+     }
+  else if (Data[PesPayloadOffset] == 0x0F) {
+     // continuation header
+     PesPayloadOffset++;
+
+     if (ContinuationHeader)
+        *ContinuationHeader = true;
+     }
+  else
+     return phInvalid; // unknown
+
+  if (Count < PesPayloadOffset)
+     return phNeedMoreData; // too short
+
+  return phMPEG1; // MPEG 1
+}
+
+#endif
+
+static uchar *findStartCode(uchar *Data, int Length, int &PesPayloadOffset)
+{
+  uchar *limit = Data + Length;
+  if (AnalyzePesHeader(Data, Length, PesPayloadOffset) <= phInvalid)
+     return 0; // neither MPEG1 nor MPEG2
+
+  Data += PesPayloadOffset + 3; // move to video payload and skip 00 00 01
+  while (Data < limit) {
+        // possible start codes that appear before/after picture data
+        // 00 00 01 B3: sequence header code
+        // 00 00 01 B8: group start code
+        // 00 00 01 00: picture start code
+        // 00 00 01 B7: sequence end code
+        if (0x01 == Data[-1] && (0xB3 == Data[0] || 0xB8 == Data[0] || 0x00 == Data[0] || 0xB7 == Data[0]) && 0x00 == Data[-2] && 0x00 == Data[-3])
+           return Data - 3;
+        Data++;
+        }
+
+  return 0;
+}
+
+static void fixIFrameHead(uchar *Data, int Length)
+{
+  int pesPayloadOffset = 0;
+  uchar *p = findStartCode(Data, Length, pesPayloadOffset);
+  if (!p) {
+     esyslog("fixIframeHead: start code not found!\n");
+     return;
+     }
+
+  Data += pesPayloadOffset; // move to video payload
+  if (Data < p)
+     memset(Data, 0, p - Data); // zero preceding bytes
+}
+
+static int fixIFrameTail(uchar *Data, int Length)
+{
+  int pesPayloadOffset = 0;
+  uchar *p = findStartCode(Data, Length, pesPayloadOffset);
+  if (!p) {
+     esyslog("fixIframeTail: start code not found!\n");
+     return Length;
+     }
+
+  // is this PES packet required?
+  uchar *videoPayload = Data + pesPayloadOffset;
+  if (videoPayload >= p)
+     return 0; // no
+
+  // adjust PES length
+  int lenPES = (p - Data);
+  Data[4] = (lenPES - 6) >> 8;
+  Data[5] = (lenPES - 6) & 0xFF;
+
+  return lenPES;
+}
+
+#define IPACKS 2048 // originally defined in remux.c
+
+static void fixIFrame(uchar *Data, int &Length, const int OriginalLength)
+{
+  int done = 0;
+
+  while (done < Length) {
+        if (0x00 != Data[0] || 0x00 != Data[1] || 0x01 != Data[2]) {
+           esyslog("fixIFrame: PES start code not found at offset %d (data length: %d, original length: %d)!", done, Length, OriginalLength);
+           if (Length > OriginalLength) // roll back additional data
+              Length = OriginalLength;
+           return;
+           }
+
+        int lenPES = 6 + Data[4] * 256 + Data[5];
+        if (0xBA == Data[3]) { // pack header has fixed length
+           if (0x00 == (0xC0 & Data[4]))
+              lenPES = 12; // MPEG1
+           else
+              lenPES = 14 + (Data[13] & 0x07); // MPEG2
+           }
+        else if (0xB9 == Data[3]) // stream end has fixed length
+           lenPES = 4;
+        else if (0xE0 == (0xF0 & Data[3])) { // video packet
+           int todo = Length - done;
+           int bite = (lenPES < todo) ? lenPES : todo;
+           if (0 == done) // first packet
+              fixIFrameHead(Data, bite);
+           else if (done >= OriginalLength) { // last packet
+              Length = done + fixIFrameTail(Data, bite);
+              return;
+              }
+           }
+        else if (0 == done && 0xC0 == (0xE0 & Data[3])) {
+           // if the first I frame packet is an audio packet then this is a radio recording: don't touch it!
+           if (Length > OriginalLength) // roll back additional data
+              Length = OriginalLength;
+           return;
+           }
+
+        done += lenPES;
+        Data += lenPES;
+        }
+}
+
+// --- END fix for I frames ---------------------------------------------
+
 void cDvbPlayer::Action(void)
 {
   uchar *b = NULL;
@@ -362,10 +572,14 @@ void cDvbPlayer::Action(void)
   nonBlockingFileReader = new cNonBlockingFileReader;
   int Length = 0;
   bool Sleep = false;
+  bool WaitingForData = false;
 
   while (Running() && (NextFile() || readIndex >= 0 || ringBuffer->Available() || !DeviceFlush(100))) {
         if (Sleep) {
-           cCondWait::SleepMs(3); // this keeps the CPU load low
+           if (WaitingForData)
+              nonBlockingFileReader->WaitForDataMs(3); // this keeps the CPU load low, but reacts immediately on new data
+           else
+              cCondWait::SleepMs(3); // this keeps the CPU load low
            Sleep = false;
            }
         cPoller Poller;
@@ -385,6 +599,7 @@ void cDvbPlayer::Action(void)
                        if (Index >= 0) {
                           if (!NextFile(FileNumber, FileOffset))
                              continue;
+                          Length += IPACKS; // fixIFrame needs next video packet
                           }
                        else {
                           // hit begin of recording: wait for device buffers to drain
@@ -423,11 +638,16 @@ void cDvbPlayer::Action(void)
                  }
               int r = nonBlockingFileReader->Read(replayFile, b, Length);
               if (r > 0) {
+                 WaitingForData = false;
+                 if (playMode == pmFast || (playMode == pmSlow && playDir == pdBackward))
+                    fixIFrame(b, r, Length - IPACKS);
                  readFrame = new cFrame(b, -r, ftUnknown, readIndex); // hands over b to the ringBuffer
                  b = NULL;
                  }
               else if (r == 0)
                  eof = true;
+              else if (r < 0 && errno == EAGAIN)
+                 WaitingForData = true;
               else if (r < 0 && FATALERRNO) {
                  LOG_ERROR;
                  break;
@@ -658,9 +878,11 @@ void cDvbPlayer::Goto(int Index, bool St
      int FileOffset, Length;
      Index = index->GetNextIFrame(Index, false, &FileNumber, &FileOffset, &Length);
      if (Index >= 0 && NextFile(FileNumber, FileOffset) && Still) {
+        Length += IPACKS; // fixIFrame needs next video packet
         uchar b[MAXFRAMESIZE + 4 + 5 + 4];
         int r = ReadFrame(replayFile, b, Length, sizeof(b));
         if (r > 0) {
+           fixIFrame(b, r, Length - IPACKS);
            if (playMode == pmPause)
               DevicePlay();
            // append sequence end code to get the image shown immediately with softdevices
Wednesday, January 27, 2010

How to retrieve XMP Data/Cuepoints using the FLVPlayback

Some users may wonder how they can retrieve the XMP data using the FLVPlayback component. The best approach is based on the following article, http://www.adobe.com/devnet/flash/articles/flvplayback_fplayer9u3_04.html, and consists of implementing your own custom callbacks via a custom client class.

The FLVPlayback component automatically creates an instance of the class fl.video.VideoPlayerClient, assigns it to the NetStream's client property, and handles all callback messages that come through the NetStream. The default class fl.video.VideoPlayerClient handles only the callbacks onMetaData() and onCuePoint(). Both of these messages generate events (MetadataEvent.METADATA_RECEIVED and MetadataEvent.CUE_POINT), so you do not need to register a custom class to handle these callbacks. However, if you do wish to use custom callbacks, you must register a custom client class. You should extend fl.video.VideoPlayerClient to ensure that the onMetaData() and onCuePoint() handling supplied by VideoPlayer and FLVPlayback will continue to work properly.

The requirements for a custom client implementation are as follows:

* The constructor method must take an fl.video.VideoPlayer instance as its only parameter.
* The custom class must include a ready property which should be set to true when all messages expected at the very beginning of the NetStream have been received. This step is necessary because the VideoPlayer class initially hides the video and plays the beginning of the file to access the correct dimensions and duration of the video. It then quickly rewinds the video to the beginning and unhides it. If the rewind occurs before all messages at the very beginning of the file are received by VideoPlayer, there's a chance those messages may never be received.
* It is highly recommended that you declare the custom class as dynamic to avoid encountering runtime errors. Errors may appear if any callbacks are triggered that the class is not set up to handle.
* It is also recommended to extend fl.video.VideoPlayerClient to ensure proper onMetaData() and onCuePoint() handling.

1) Create a new extended metadata event

Copy IVPEvent.as from the source code of FLVPlayback locally under your FLA project, into the fl\video subfolder. Use the code below to register a new extended event; save it in a file named ExtendedMetadataEvent.as:

    package fl.video
    {
        import flash.events.Event;

        /**
         * Flash® Player dispatches an ExtendedMetadataEvent object when the user
         * requests the FLV/F4V/H264 file's metadata information packet (NetStream.onXMPData).
         *
         * @langversion 3.0
         * @playerversion Flash 9.0.28.0
         */
        public class ExtendedMetadataEvent extends fl.video.MetadataEvent
        {
            /**
             * Defines the value of the type property of a xmpDataReceived event object.
             *
             * @see FLVPlayback#event:xmpDataReceived xmpDataReceived event
             * @eventType xmpDataReceived
             * @langversion 3.0
             * @playerversion Flash 10.0.0.0
             */
            public static const XMPDATA_RECEIVED:String = "xmpDataReceived";

            /**
             * Creates an Event object that contains information about metadata events.
             * Event objects are passed as parameters to event listeners.
             *
             * @param type The type of the event. Event listeners can access this
             * information through the inherited type property. Possible values are
             * ExtendedMetadataEvent.XMPDATA_RECEIVED.
             *
             * @param bubbles Determines whether the Event object participates in the
             * bubbling stage of the event flow. Event listeners can access this
             * information through the inherited bubbles property.
             *
             * @param cancelable Determines whether the Event object can be canceled.
             * Event listeners can access this information through the inherited
             * cancelable property.
             *
             * @param info Determines the dynamic properties to add.
             *
             * @param vp Determines the index of the VideoPlayer object.
             *
             * @langversion 3.0
             * @playerversion Flash 9.0.28.0
             */
            public function ExtendedMetadataEvent(type:String, bubbles:Boolean = false,
                                                  cancelable:Boolean = false,
                                                  info:Object = null, vp:uint = 0)
            {
                super(type, bubbles, cancelable);
                super.info = info;
                super.vp = vp;
            }

            /**
             * @private
             */
            override public function clone():Event
            {
                return new ExtendedMetadataEvent(type, bubbles, cancelable, info, vp);
            }
        } // class ExtendedMetadataEvent
    } // package fl.video

2) Use the code below to register the custom client class:

    package
    {
        import fl.video.VideoPlayer;
        import fl.video.VideoPlayerClient;
        import fl.video.ExtendedMetadataEvent;
        import flash.events.Event;
        import flash.events.EventDispatcher;
        import flash.display.Loader;
        import flash.display.DisplayObjectContainer;
        import flash.utils.ByteArray;

        /**
         * NetstreamDataVideoClient subclasses VideoPlayerClient, the default
         * VideoPlayer.netStreamClientClass value. This way, all the metadata and
         * cue point handling built into the VideoPlayer works properly.
         *
         * The class is declared dynamic so that if any other messages are received
         * that we do not support in this class (onTextData(), other custom
         * messages), no runtime errors will occur.
         */
        dynamic public class NetstreamDataVideoClient extends VideoPlayerClient
        {
            /**
             * These variables are set in onXMPData/onImageData and are used to
             * help determine when the ready property should be true.
             */
            protected var gotXMPData:Boolean;
            protected var gotImageData:Boolean;
            public var XMPString:String = "";
            protected var verbose:Boolean = false;

            /**
             * The constructor must take a VideoPlayer as its only parameter.
             * It needs to pass that argument along to the super constructor.
             */
            public function NetstreamDataVideoClient(vp:VideoPlayer)
            {
                super(vp);
                gotImageData = false;
                gotXMPData = false;
            }

            /**
             * Handling for the onImageData() message. Loads the image bytes into
             * a flash.display.Loader and adds the image to the display list.
             */
            public function onImageData(info:Object):void
            {
                // Only handle onImageData() once. Any time we seek to the
                // start of the file, the message will call this function again.
                trace("onImageData");
                if (gotImageData)
                    return;

                var loader:Loader = new Loader();
                loader.loadBytes(info.data);
                var parent:DisplayObjectContainer = _owner.root as DisplayObjectContainer;
                if (parent)
                {
                    parent.addChildAt(loader, 0);
                }
                gotImageData = true;
            }

            public function onTextData(info:Object):void
            {
                trace(info);
                recurseTrace(info, "");
            }

            // FOR LOCAL FILE OR HTTP
            public function onXMPData(info:Object):void
            {
                if (gotXMPData)
                    return;
                if (verbose)
                    trace("NetstreamDataVideoClient onXMPData");

                // F4V/H264 uses data to pass raw XMP
                if (info.data != undefined)
                {
                    if (verbose)
                        trace(info.data);
                }
                else
                {
                    if (info.liveXML != undefined)
                    {
                        // FLV uses liveXML to pass raw XMP
                        if (verbose)
                            trace(info.liveXML);
                    }
                }
                gotXMPData = true;

                // for (var i in info)
                // {
                //     trace(i + " is " + info[i]);
                // }
                // trace("####\n");

                // XMP came from Adobe Media Encoder for Flash
                if (info.data != undefined)
                {
                    XMPString = info.data;
                    _owner.dispatchEvent(new ExtendedMetadataEvent(
                        ExtendedMetadataEvent.XMPDATA_RECEIVED, false, false, info));
                }
                else
                {
                    if (info.liveXML != undefined)
                    {
                        XMPString = info.liveXML;
                        _owner.dispatchEvent(new ExtendedMetadataEvent(
                            ExtendedMetadataEvent.XMPDATA_RECEIVED, false, false, info));
                    }
                }
            }

            private function recurseTrace(info:Object, indent:String):void
            {
                for (var i:* in info)
                {
                    if (typeof info[i] == "object")
                    {
                        trace(indent + i + ":");
                        recurseTrace(info[i], indent + " ");
                    }
                    else
                    {
                        trace(indent + i + " : " + info[i]);
                    }
                }
            }

            /**
             * Property that specifies whether the early messages have been received,
             * so it is OK for the player to rewind back to the beginning. If we allow
             * the VideoPlayer to rewind before all messages at the very beginning of
             * the file are received, we may never receive them.
             *
             * The default class, VideoPlayerClient, only requires onMetaData() to be
             * received before rewinding. onImageData() also appears at the beginning
             * of the file, so we might miss that if we do not override this property
             * and include a check for this data.
             */
            override public function get ready():Boolean
            {
                return (super.ready && gotImageData);
            }
        }
    }

IMPORTANT: Don't try to parse the XMP data in the callback, as it may cause timing issues — you want to give control back to the NetStream object as soon as possible. Instead, fire an event back to your client event listener and make sure you interpret the data once the video is in a ready-to-seek state.

3) Receive XMP data using FLVPlayback

    import NetstreamDataVideoClient;
    import fl.video.ExtendedMetadataEvent;
    import flash.events.*;
    import fl.video.*;

    VideoPlayer.netStreamClientClass = NetstreamDataVideoClient;

    _playback = new FLVPlayback();
    _playback.width = 1280;
    _playback.height = 720;
    _playback.autoRewind = false;
    _playback.bufferingBarHidesAndDisablesOthers = true;
    _playback.visible = true;
    addChild(_playback);

    _playback.addEventListener(MetadataEvent.METADATA_RECEIVED, metadataListener);
    // the XMP one
    _playback.addEventListener(ExtendedMetadataEvent.XMPDATA_RECEIVED, xmpListener);

    private function xmpListener(inEvt:fl.video.ExtendedMetadataEvent):void
    {
        var XMPString:String = "";
        if (this.verbose)
            trace("--- xmpListener ----");
        if (this.verbose)
        {
            if (inEvt.info.data != undefined)
            {
                XMPString = inEvt.info.data;
                trace("RECEIVED XMP DATA = " + XMPString); // ready to parse
            }
            else
            {
                if (inEvt.info.liveXML != undefined)
                {
                    XMPString = inEvt.info.liveXML;
                    trace("RECEIVED XMP DATA = " + XMPString); // ready to parse
                }
            }
        }
    }

Hope you enjoy this article!
Time to interpret the data now: http://younsi.blogspot.com/2008/12/interactive-video-in-flash-using-f4v.html — and store some useful XMP data: http://www.actimus.com/

Free AS3 Interactive Video FLVPlayback component for FLASH CS5: http://shop.actimus.com/product_info.php?products_id=122
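Once XMPString has been captured (and the player is in a ready state, per the IMPORTANT note above), it can be parsed with E4X. A minimal, hypothetical sketch — the namespace URIs are the standard XMP dynamic-media and RDF ones, and may need adjusting to your encoder's output:

    var xmp:XML = new XML(XMPString);
    var xmpDM:Namespace = new Namespace("http://ns.adobe.com/xmp/1.0/DynamicMedia/");
    var rdf:Namespace = new Namespace("http://www.w3.org/1999/02/22-rdf-syntax-ns#");

    // Dump every list item found under the dynamic-media Tracks node
    // (markers/cue points typically live there).
    for each (var item:XML in xmp..xmpDM::Tracks..rdf::li)
    {
        trace(item.toXMLString());
    }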
A Cleanup API for Windows

Posted 24 Aug 2006, CPOL

Provides a general cleanup API for Windows, ported into Win32 and COM dynamic-link libraries.

Introduction

When you opt to keep your working and internet browsing habits private, you can install software tools that clean up the typical files reflecting these habits, such as temporary internet files, cookies, internet history, recent document traces, etc. Alternatively, you can write your own cleanup program in a snap, using some general cleanup API which hides the details in the background. This article provides a simple cleanup API ported into a Win32 DLL and into an equivalent COM DLL.

One practical example of this cleanup interface implementation is a program automating computer cleanup tasks, which never forgets to destroy private information upon you leaving a computer. If you are interested in the how-to and testing aspects beforehand, I suggest you go to the How to Use section at once.

Contents

- Internet Cache Cleanup
- Internet Cache's index.dat Removal
- Delete_IECache Function
- Internet Cookies Cleanup
- Internet History Cleanup
- IE's Address Bar History Cleanup
- Desktop's Recent Documents Cleanup
- Desktop's Run History Cleanup
- Desktop's Recycle Bin
- How to Use
- Conclusion

Internet Cache Cleanup

Typically, all internet files browsed on the internet with IE are stored (cached) in the Temporary Internet Files folder. This speeds up future browsing. The internet cache is located at %userprofile%\Local Settings\Temporary Internet Files. If not deleted, these files provide information about our working and browsing habits.

At first glance, the cleanup of the internet cache in particular (and computer cleanup in general) might seem straightforward because, indeed, we only need to empty (or delete) some specific folders or files. Let's recap, however, some typical pitfalls.

Typical Pitfalls

Pitfall #1: Use of Internet Explorer's own tools. Why clean up programmatically if we can choose Delete Files/Cookies in the Internet Options of IE? First, the Internet Options utility is misleading you a little bit. In fact, it does not clean up all the files and folders in the Temporary Internet Files folder. Although Windows Explorer shows an empty list view control after running the cleanup, we'll see different results when trying the command prompt:

    dir /s "C:\Documents and Settings\I'm\Local Settings\Temporary Internet Files"

The output shows that after the IE cleanup, the Content.IE5 folder and the index.dat file still exist. Why does Windows Explorer show an empty list in opposition to this? Because Temporary Internet Files, as you are likely aware, is a special folder (I'll touch on this subject later), and Windows Explorer provides a special UI for special folders.

By the way, you can see the full content of the internet cache using the following trick:

- Open Windows Explorer.
- Copy-and-paste into the address area: C:\Documents and Settings\I'm\Local Settings\Temporary Internet Files\Content.IE5, and press Enter (or place a shortcut on the desktop and provide this quoted path as the target for the shortcut).

The second reason to implement a programmatic cleanup is that Windows does not provide an interface to automate cleanup tasks.

Pitfall #2: Manual deletion with Windows Explorer. Trying to delete the internet cache folder in Windows Explorer with the keyboard's Delete key (with an idea to re-create it later on). In response, Windows prompts a message saying that:

    Temporary Internet Files is a Windows system folder and it is required for Windows to run properly. It cannot be deleted.
This is again because the Temporary Internet Files folder is a Windows special folder (identified by CSIDL_INTERNET_CACHE and a PIDL) and belongs to the shell namespace (managed by the COM-style API) rather than to the file system. In contrast to file system folders, special folders cannot be deleted/created manually or programmatically (in general). The shell namespace is more inclusive than the file system, i.e., it contains not only the NTFS folders and folders like My Documents, History, etc., which represent physical storage, but also virtual folders such as the Printers folder, which represents links to printers and is not physical storage. All file system objects can be managed by the shell namespace, but not vice versa, because the file system, as Microsoft says, is a subset of a broader shell namespace.

Pitfall #3: Command line deletion. Trying to delete the contents of the internet cache folder in the command prompt with del (or the core Win32 API). Seemingly, we can try the most direct method, like this:

    del /s /q "C:\Documents and Settings\I'm\Local Settings\Temporary Internet Files"

As a result, we'll see a lot of messages like:

    The process cannot access the file because it is being used by another process.

As we'll see later, this is because some of the internet cache files/folders are locked by other processes.

Pitfall #4: Choice of Windows API. As described in the MSDN documentation, the Shell API can be used to manage (including deletion) the content of special folders. In general, you translate a special folder's COM-style ID (CSIDL_INTERNET_CACHE, CSIDL_HISTORY, etc.) to the folder's PIDL (shell namespace ID) with SHGetSpecialFolderLocation. Next, using the PIDL, you get an IShellFolder pointer to this folder. With the IShellFolder pointer, you can un-scroll further interfaces to enumerate entries and finally make deletions. However, when deleting entries in the History folder (IShellFolder pointer already obtained), MSDN recommends (Q327569: "DeleteUrl() Does Not Delete the Internet Explorer History Folder Entry") using the IContextMenu::InvokeCommand method to delete History items, which has a disadvantage because "you cannot disable the confirmation dialog box that appears" (same article). This is unacceptable for a UI-less cleanup API.

Would we be more successful trying to delete the internet cache files/folders with Win32's DeleteFile/RemoveDirectory? This works perfectly for ordinary folders: we need to enumerate the files/subdirectories in the directory with FindFirstFile/FindNextFile, delete them one by one, and recurse if a subdirectory is found. In my experiments, however, running cleanup with the Win32 functions corrupts the Windows Explorer view of the internet cache folder, although the view is auto-restored (the subfolder Content.IE5 and its file index.dat are not deleted).

When doing internet cache cleanup, it is important to notice that the cache is managed by the WinInet library, and therefore other APIs (like the Win32 core API) have no guarantee of success. In contrast to other libraries, WinInet provides a straightforward subset designed exactly for managing the content of the internet cache, which includes all logical-level files stored in the Temporary Internet Files, Cookies, and History folders. Having said that, I am using the WinInet API for cleanup in most cases (instead of the Shell API or Win32 API). In the following sections, I'll discuss briefly every function in this API.
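As an illustrative aside to pitfall #4 (a sketch, not part of the article's API; error handling trimmed), the CSIDL-to-PIDL translation and path lookup look roughly like this:

    #include <windows.h>
    #include <shlobj.h>

    // Resolve the Temporary Internet Files special folder to a PIDL,
    // then to its physical path. The PIDL must be freed by the caller.
    LPITEMIDLIST pidl = NULL;
    if (SUCCEEDED(SHGetSpecialFolderLocation(NULL, CSIDL_INTERNET_CACHE, &pidl)))
    {
        TCHAR szPath[MAX_PATH];
        if (SHGetPathFromIDList(pidl, szPath))
        {
            // szPath now holds something like
            // C:\Documents and Settings\<user>\Local Settings\Temporary Internet Files
        }
        CoTaskMemFree(pidl);
    }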
Internet Cache's index.dat Removal

Windows employs index.dat files for indexing the content of some folders (internet folders for the most part) to make subsequent accesses faster. As noticed earlier, cleanup with IE's Internet Options utility does not make a full cleanup of the internet cache. In contrast to the empty content shown by Windows Explorer, the dir /s command shows that the Temporary Internet Files folder is not actually empty, and contains the Content.IE5 folder (plus subfolders) and the index.dat file. This is because folders like Recent Documents, History, Temporary Internet Files, and some others are managed in Windows by the Windows Shell; they are special folders.

Why bother about index.dat files at all? In fact, unhandled index.dat files represent a personal security breach. These files contain a list of all internet places visited, and can be viewed, copied, etc. For example, suppose we don't want private information about our favorite puppies (found with IE on the internet) to be recorded permanently on our computer. We empty the internet cache (i.e., remove the physically cached web pages) but leave index.dat untouched. At first glance, our privacy has been protected. In contrast to this conjecture, a typical index.dat viewer program shows otherwise: index.dat provides an outlook of the visited internet sites.

Further, for the sake of experiment, you can try copying an old index.dat onto a clean system, replacing the existing index.dat, to see that the Temporary Internet Files folder is, in fact, a viewer of index.dat: no web content is cached (the internet browser has never been used), but the TIF folder shows many URLs (i.e., index.dat's content). Obviously, this is a loophole for malicious users, who can copy this single file to see the browsing habits of a targeted person.

Generally, there are three major index.dat files that keep track of visited sites:

- Internet Cache's index.dat, located at %userprofile%\Local Settings\Temporary Internet Files\Content.IE5.
- Internet History's index.dat, located at %userprofile%\Local Settings\History\History.IE5.
- Internet Cookies' index.dat, located at %userprofile%\Cookies.

Because the deletion of any index.dat file goes in the same manner, I'll focus on the details of deleting the cache's index.dat.

Apparently, Win32's DeleteFile can be used to delete index.dat programmatically. Let's try, however, to delete index.dat manually with the equivalent del command. We'd see the infamous problem that index.dat cannot be deleted because:

    The process cannot access the file because it is being used by another process.

I've seen a lot of questions in newsgroups regarding the problem of how to delete this file, and why it cannot be deleted in a regular way. In fact, this is a typical situation where file/folder handles are held by another process. Commonly, we need to release these file handles by killing the relevant process.

Which processes obstruct the deletion? Mark Russinovich's handle utility shows which running Windows processes have open handles on a particular file or folder. Running it against index.dat while IE is open, it becomes immediately clear that Windows Explorer (explorer.exe) and Internet Explorer (iexplore.exe) have open handles on index.dat, obstructing us from deleting this file manually with del. Notably, it is better to provide a full quoted path for the handle argument; otherwise, the utility is prone to outputting irrelevant processes.
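For instance, a hypothetical invocation (the path matches the article's examples; handle.exe comes from Sysinternals):

    handle.exe "C:\Documents and Settings\I'm\Local Settings\Temporary Internet Files\Content.IE5\index.dat"

Each line of output names a process holding an open handle on the file.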
Obviously, one should kill the explorer.exe and iexplore.exe processes and then use del again. Notice that other processes might have open handles on index.dat; if that were the case, the handle utility would show them on your computer. In particular, if Windows Messenger, which comes with SP2, was ever used on Windows XP, the Messenger's process msmsgs.exe is auto-started, by default, on every computer startup. msmsgs.exe also locks index.dat files. Of course, the list of processes that might lock index.dat files is not exhaustive. For example, Sony VAIO notebooks, which come with pre-installed Sony software, run additional process(es) that lock index.dat files by default.

The resulting manual solution to delete index.dat is the following:

- Run Windows Task Manager and kill explorer.exe and iexplore.exe.
- In Windows Task Manager, launch the command prompt, navigate to the index.dat location, and delete it with the del command.
- Run explorer.exe in the command prompt to restore the shell.

In the same manner, index.dat can be deleted manually by starting Windows in safe mode, because in safe mode the processes and services which might lock this file are not yet loaded.

Delete_IECache Function

So far, I've concentrated on some pitfalls that occur while emptying the IE cache and, in particular, described the manual steps to delete index.dat. What about a programmatic solution? Setting index.dat aside, the deletion of the internet cache is pretty easy and can be programmed with WinInet functions. How can we delete index.dat programmatically? Ostensibly, Win32's DeleteFile can be used in a proper time frame during Windows startup, when processes/services have not yet opened handles on this file; alternatively, the open handles should be released forcefully and DeleteFile used in the same way.

For this matter, the cleanup API provides the function Delete_IECache (see also the next section) to delete index.dat, which you can use by setting the bDeleteCacheIndex parameter to TRUE (FALSE by default) if index.dat is not locked:

    BOOL Delete_IECache(BOOL bDeleteCache = TRUE, BOOL bDeleteCacheIndex = FALSE);

More importantly, the bDeleteCacheIndex = TRUE switch should be used only if you are sure that other processes do not have open handles on that file. Otherwise, the index.dat deletion silently fails.

The code below represents Delete_IECache's function body, implemented with WinInet functions. The code is the programmatic equivalent of Delete Files in the Internet Options of IE, and enables you to delete the cache's physical content (HTML pages, pictures, scripts, etc.) and its index.dat file. Basically, all the work is done by three functions: FindFirstUrlCacheEntry, FindNextUrlCacheEntry, and DeleteUrlCacheEntry. We obtain a handle to the internet cache, hCacheEnumHandle, with FindFirstUrlCacheEntry and, having this handle, obtain pointers to the next cache entries with FindNextUrlCacheEntry, deleting them with DeleteUrlCacheEntry. One thing to note is that we should adjust the size of the LPINTERNET_CACHE_ENTRY_INFO structure before making a call to DeleteUrlCacheEntry. This is simple because the correct size is returned in the last parameter (an in/out parameter) of FindNextUrlCacheEntry.
    __declspec( dllexport ) BOOL Delete_IECache(BOOL bDeleteCache = TRUE, BOOL bDeleteCacheIndex = FALSE)
    {
        TCHAR szUserProfile[200];
        TCHAR szFilePath[200];
        HANDLE hCacheEnumHandle = NULL;
        LPINTERNET_CACHE_ENTRY_INFO lpCacheEntry = NULL;
        DWORD dwSize = 4096; // initial buffer size

        // Delete index.dat if requested.
        // Be sure that index.dat is not locked.
        if (bDeleteCacheIndex) {
            // Retrieve the user profile path from the environment.
            ExpandEnvironmentStrings("%userprofile%", szUserProfile,
                                     sizeof(szUserProfile));
            wsprintf(szFilePath, "%s%s", szUserProfile,
                     "\\Local Settings\\Temporary Internet"
                     " Files\\Content.IE5\\index.dat");
            DeleteFile(szFilePath);
            if (!bDeleteCache)
                return TRUE;
        }

        // Allocate the initial buffer for the cache entry structure.
        lpCacheEntry = (LPINTERNET_CACHE_ENTRY_INFO) new char[dwSize];
        lpCacheEntry->dwStructSize = dwSize;

        // URL search pattern (1st parameter) options are:
        // NULL ("*.*"), "cookie:" or "visited:".
        hCacheEnumHandle = FindFirstUrlCacheEntry(NULL /* in */,
                                                  lpCacheEntry /* out */,
                                                  &dwSize /* in, out */);

        // First, obtain a handle to the internet cache with
        // FindFirstUrlCacheEntry for later use with FindNextUrlCacheEntry.
        if (hCacheEnumHandle != NULL) {
            // When the cache entry is not a cookie, delete the entry.
            if (!(lpCacheEntry->CacheEntryType & COOKIE_CACHE_ENTRY)) {
                DeleteUrlCacheEntry(lpCacheEntry->lpszSourceUrlName);
            }
        }
        else {
            switch (GetLastError()) {
                case ERROR_INSUFFICIENT_BUFFER:
                    lpCacheEntry = (LPINTERNET_CACHE_ENTRY_INFO) new char[dwSize];
                    lpCacheEntry->dwStructSize = dwSize;

                    // Repeat the first-step search with the adjusted buffer; exit
                    // if it fails again (in practice, one buffer size adjustment
                    // is always OK).
                    hCacheEnumHandle = FindFirstUrlCacheEntry(NULL, lpCacheEntry,
                                                              &dwSize);
                    if (hCacheEnumHandle != NULL) {
                        // When the cache entry is not a cookie, delete the entry.
                        if (!(lpCacheEntry->CacheEntryType & COOKIE_CACHE_ENTRY)) {
                            DeleteUrlCacheEntry(lpCacheEntry->lpszSourceUrlName);
                        }
                        break;
                    }
                    else {
                        // FindFirstUrlCacheEntry fails again, return.
                        return FALSE;
                    }
                default:
                    FindCloseUrlCache(hCacheEnumHandle);
                    return FALSE;
            }
        }

        // Next, use hCacheEnumHandle obtained from the previous step
        // to delete subsequent items of the cache.
        do {
            // Notice that the return values of FindNextUrlCacheEntry (BOOL)
            // and FindFirstUrlCacheEntry (HANDLE) are different.
            if (FindNextUrlCacheEntry(hCacheEnumHandle, lpCacheEntry, &dwSize)) {
                // When the cache entry is not a cookie, delete the entry.
                if (!(lpCacheEntry->CacheEntryType & COOKIE_CACHE_ENTRY)) {
                    DeleteUrlCacheEntry(lpCacheEntry->lpszSourceUrlName);
                }
            }
            else {
                switch (GetLastError()) {
                    case ERROR_INSUFFICIENT_BUFFER:
                        lpCacheEntry = (LPINTERNET_CACHE_ENTRY_INFO) new char[dwSize];
                        lpCacheEntry->dwStructSize = dwSize;

                        // Repeat the next-step search with the adjusted buffer; exit
                        // if the error comes up again (in practice, one buffer size
                        // adjustment is always OK).
                        if (FindNextUrlCacheEntry(hCacheEnumHandle, lpCacheEntry,
                                                  &dwSize)) {
                            // When the cache entry is not a cookie, delete the entry.
                            if (!(lpCacheEntry->CacheEntryType & COOKIE_CACHE_ENTRY)) {
                                DeleteUrlCacheEntry(lpCacheEntry->lpszSourceUrlName);
                            }
                            break;
                        }
                        else {
                            // FindNextUrlCacheEntry fails again, return.
                            FindCloseUrlCache(hCacheEnumHandle);
                            return FALSE;
                        }
                        break;
                    case ERROR_NO_MORE_ITEMS:
                        FindCloseUrlCache(hCacheEnumHandle);
                        return TRUE;
                    default:
                        FindCloseUrlCache(hCacheEnumHandle);
                        return FALSE;
                }
            }
        } while (TRUE);
    }

An example of the cleanup API implementation is the Synaptex 4S Lock 1.08 program, which uses the Delete_IECache function in its cleanup module to delete index.dat in a startup time frame or during shutdown, to automate cleanup.
[Screenshot: the cleanup interface of 4S Lock 1.08; the 4.7 MB download link did not survive extraction.]

Internet Cookies Cleanup

Cookies store user-specific information, such as names and passwords used to access web pages, and, in more general terms, are frequently used to store web page customization information. Though they enable us not to re-enter our own credentials, the private data stored in them can represent a security loophole. In fact, cookies are part of the internet cache located at the %userprofile%\Local Settings\Temporary Internet Files folder, although Windows stores them physically in a separate location, at %userprofile%\Cookies. Because the internet cache is managed by the WinInet library, FindFirstUrlCacheEntry/FindNextUrlCacheEntry locate cookies as well, and they can be deleted with DeleteUrlCacheEntry like other cache entries.

To access previously stored cookies faster, Windows indexes cookie files in the Cookies' index.dat file located at %userprofile%\Cookies. Even if the cookies are deleted, this index.dat usually still contains a list of them. Accordingly, the cleanup API provides the Delete_IECookies function, where bDeleteCookies and bDeleteCookiesIndex should be set depending on the user's choice. Be sure that the Cookies' index.dat is not locked before setting bDeleteCookiesIndex to TRUE.

    BOOL Delete_IECookies(BOOL bDeleteCookies = TRUE, BOOL bDeleteCookiesIndex = FALSE);

The code for the Delete_IECookies function is almost identical to Delete_IECache, but while enumerating cookies, I use "cookie:" (instead of NULL) as the first parameter for the FindFirstUrlCacheEntry function; a condensed sketch follows below.
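A condensed illustrative sketch (not the DLL's actual listing; it follows the same buffer-growing pattern as Delete_IECache, with most error handling trimmed):

    // Sketch: enumerate only cookie entries and delete them one by one.
    DWORD dwBufSize = 4096, dwSize = dwBufSize;
    LPINTERNET_CACHE_ENTRY_INFO lpEntry =
        (LPINTERNET_CACHE_ENTRY_INFO) new char[dwBufSize];
    lpEntry->dwStructSize = dwSize;

    HANDLE hEnum = FindFirstUrlCacheEntry("cookie:", lpEntry, &dwSize);
    if (hEnum != NULL) {
        do {
            DeleteUrlCacheEntry(lpEntry->lpszSourceUrlName);
            dwSize = dwBufSize; // reset the in/out size before each call
            if (!FindNextUrlCacheEntry(hEnum, lpEntry, &dwSize)) {
                if (GetLastError() != ERROR_INSUFFICIENT_BUFFER)
                    break;      // ERROR_NO_MORE_ITEMS, or a real failure
                // Grow the buffer to the size reported back and retry once.
                delete[] (char*) lpEntry;
                dwBufSize = dwSize;
                lpEntry = (LPINTERNET_CACHE_ENTRY_INFO) new char[dwBufSize];
                lpEntry->dwStructSize = dwSize;
                if (!FindNextUrlCacheEntry(hEnum, lpEntry, &dwSize))
                    break;
            }
        } while (TRUE);
        FindCloseUrlCache(hEnum);
    }
    delete[] (char*) lpEntry;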
ExpandEnvironmentStrings("%userprofile%", szUserProfile, sizeof(szUserProfile)); wsprintf(szFilePath, "%s%s", szUserProfile, "\\Local Settings\\History" "\\History.IE5\\index.dat"); DeleteFile(szFilePath); if (!bDeleteHistoryIndex) return S_OK; } CoInitialize(NULL); IUrlHistoryStg2* pUrlHistoryStg2 = NULL; hr = CoCreateInstance(CLSID_CUrlHistory, NULL, CLSCTX_INPROC, IID_IUrlHistoryStg2, (void**)&pUrlHistoryStg2); if (SUCCEEDED(hr)) { hr = pUrlHistoryStg2->ClearHistory(); pUrlHistoryStg2->Release(); } CoUninitialize(); return hr; } IE's Address Bar History Cleanup URLs previously typed in the address bar of IE are known as typed URLs, and are visible by expanding the address bar's drop down list. They are recorded to registry as url1, url2, url3 etc., at HKCU\Software\Microsoft\Internet Explorer\TypedURLs: Their deletion is a simple matter of enumerating the entries and removing them one by one: __declspec( dllexport ) void Delete_IEAddressBarHistory() { HKEY hKey; DWORD dwResult; TCHAR szValueName[10]; int i; // Open TypedURLs key. dwResult = RegOpenKey(HKEY_CURRENT_USER, "Software\\Microsoft\\Internet Explorer\\TypedURLs", &hKey ); i = 1; wsprintf(szValueName, "url%d", i); while (RegDeleteValue(hKey, szValueName) == ERROR_SUCCESS) { i++; wsprintf(szValueName, "url%d", i); } RegCloseKey(hKey); } Desktop's Recent Documents Cleanup This list tracks the recently opened documents, such as Word, Excel, Notepad documents etc., and is accessible from the Start/My Recent Documents menu. All cleanup work is done by the single Shell command: __declspec( dllexport ) void Delete_DesktopRecentDocsHistory() { SHAddToRecentDocs(SHARD_PATH, NULL /* NULL clears history */); } Desktop's Run History Cleanup The commands previously typed at Start/Run ... are visible by expanding its dropdown control. They are known as RunMRU list (MRU stands for the Most Recently Used), and is recorded using consecutive alphabets for the registry's value name as a, b, c etc., at HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU: The command that is typed wrongly is not recorded. The RunMRU list is limited to 26 entries. If the 27th command name is going to be recorded, it replaces the value data for the top value name ("a") and so on. In contrast to the method of recording typed URL in the IE's address bar, the RunMRU list has a MRUList value name, which is formed by concatenating value names of the RunMRU list and can be used for enumerating and cleaning the list like this: __declspec( dllexport ) void Delete_DesktopRunHistory() { HKEY hKey; DWORD dwResult; TCHAR szValueName[10]; string c, s; s = "abcdefghijklmnopqrstuvwxyz"; // Open RunMRU key. dwResult = RegOpenKey(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\" "CurrentVersion\\Explorer\\RunMRU", &hKey ); for (int i = 0; i < 26 /* z */; i++) { c = s.at(i); wsprintf(szValueName, "%s", c.c_str()); RegDeleteValue(hKey, szValueName); } RegDeleteValue(hKey, _T("MRUList")); RegCloseKey(hKey); } Desktop's Recycle Bin Unless you hold SHIFT key while deleting files, they are placed into the Recycle Bin to provide easy restore option. 
You might want to automate emptying the Recycle Bin with a single Shell cleanup function like this:

__declspec( dllexport )
void Delete_DesktopRecycleBinContents()
{
    SHEmptyRecycleBin(NULL, NULL,
                      SHERB_NOCONFIRMATION | SHERB_NOPROGRESSUI | SHERB_NOSOUND);
}

How to Use

The cleanup API for Windows is a set of functions ported into a Win32 DLL and a COM DLL, named cleanup.dll for both DLL types, provided as downloadable projects (I used VS.NET 2003 as the IDE). The DLL projects should be built as release builds prior to building the test clients. Though the COM cleanup API interface is slightly different from the Win32 interface due to COM-style requirements, both APIs are identical inside and use only standard Win32 functions (no C run-time libraries). Because of this, cleanup.dll relies on libraries available not only on Windows XP or Windows 2003, but also on systems dating back to Windows 95 with IE installed. In particular, the Win32 DLL employs the standard kernel32.dll, user32.dll, shell32.dll, ole32.dll, advapi32.dll, and wininet.dll.

Which of the DLLs to use, therefore, is a matter of personal preference and style. In a C# or COM application, the COM cleanup.dll would be preferable; in an MFC or compact Win32 application, you might prefer the Win32 cleanup.dll.

The following table provides the cleanup API function prototypes for the Win32 DLL and their descriptions. The important point is that when deleting index.dat files (those of the TIF, Cookies, and History folders), it is the responsibility of your application to make sure these files are not locked. If an index.dat file is locked, the corresponding functions fail silently and harmlessly. An example of an application that checks the lock status of index.dat files before using the cleanup API functions is the cleanup module of 4S Lock 1.08 mentioned before.

Function Prototype | Description

#1 BOOL Delete_IECache(BOOL bDeleteCache = TRUE, BOOL bDeleteCacheIndex = FALSE)
All internet files browsed with IE are stored (cached) in the Temporary Internet Files folder; this speeds up future browsing. To empty the internet cache but leave its index.dat intact, use: Delete_IECache(); Delete_IECache can also be used to delete the cache's index.dat (set bDeleteCacheIndex = TRUE) if you are sure this file is not locked.

#2 BOOL Delete_IECookies(BOOL bDeleteCookies = TRUE, BOOL bDeleteCookiesIndex = FALSE)
Cookies store user-specific web page customization information. They save a user from re-entering credentials such as names and passwords, so the private data stored in them can represent a security loophole. To empty internet cookies but leave their index.dat intact, use: Delete_IECookies(); Delete_IECookies can be used to delete the cookies' index.dat (set bDeleteCookiesIndex = TRUE) if you are sure this file is not locked.

#3 HRESULT Delete_IEHistory(BOOL bDeleteHistory = TRUE, BOOL bDeleteHistoryIndex = FALSE)
Internet browsing history reflects a user's browsing habits and is visible by clicking IE's History button. To empty internet history but leave its index.dat intact, use: Delete_IEHistory(); Delete_IEHistory can be used to delete the history's index.dat (set bDeleteHistoryIndex = TRUE) if you are sure this file is not locked.

#4 void Delete_IEAddressBarHistory()
Delete_IEAddressBarHistory() can be used to empty the list of previously entered internet addresses, which reflects a user's browsing habits and is visible by expanding the address bar's drop-down control in Internet Explorer.
#5 void Delete_DesktopRecentDocsHistory()
Delete_DesktopRecentDocsHistory() can be used to empty the list of recently opened documents (Word, Excel, Notepad documents, etc.), which is accessible from the Start/My Recent Documents menu.

#6 void Delete_DesktopRunHistory()
Delete_DesktopRunHistory() can be used to empty the personal command history, which reflects commands entered at Start/Run. The list is visible by expanding the drop-down control at Start/Run.

#7 void Delete_DesktopRecycleBinContents()
Unless the SHIFT key is pressed while deleting files, they are placed into the Recycle Bin to provide an easy restore option. The Delete_DesktopRecycleBinContents() function can be used to empty the Recycle Bin contents.

Finally, you will find three test applications in the article's downloads (besides the two projects for the Win32 and COM cleanup.dll):

• MFC Test Client for Win32 Cleanup API, which tests the Win32 cleanup.dll.
• C# Test Client for COM Cleanup API, which tests the COM cleanup.dll.
• MFC Test Client for COM Cleanup API, which tests the COM cleanup.dll.

All these test applications have identical interfaces. For example, the interface of the MFC application for testing the Win32 cleanup.dll is the following:

Conclusion

This article provides a general cleanup API for Windows, test applications, and complete source code. The cleanup API can be used on any Windows system dating back to Windows 95, provided that IE is installed (or wininet.dll is available). The cleanup API is packaged into Win32 and COM DLLs, which are identical internally, and can easily be used in your custom application (C++, MFC, Win32, C#, ASP.NET, etc.), provided that you mention this article in accordance with the regular CodeProject terms of use. Hopefully, the information and tools provided with this article make one feel safer in any environment.

Contact me

You can reach Marcel Lambert at [email protected].

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

About the Author

Marcel Lambert
United States
No Biography provided

Comments and Discussions

General: Windows NT Command Script (bailsoft, 8-Apr-07 13:13)

I may be missing the point, but the easiest way to clean all things IE is to save the following to a Windows NT Command Script file:

@echo on
cd %homedrive%%homepath%
rd /s/q locals~1\tempor~1
rd /s/q locals~1\temp\tempor~1
rd /s/q cookies
rd /s/q temp\cookies
rd /s/q locals~1\history
rd /s/q locals~1\temp\history
rd /s/q recent
md recent
rd /s/q locals~1\temp
md locals~1\temp

Save the file as 'CleanIE.cmd' and place the file in your 'Start Menu\Programs\Startup' folder; works a treat every time! Why did Microsoft have to make this so difficult?

Bailsoft

Last Updated 24 Aug 2006. Article Copyright 2006 by Marcel Lambert.
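To round out the How to Use section above, here is a minimal sketch of consuming the Win32 cleanup.dll dynamically. This is illustrative only and not taken from the article's downloads: it assumes the exports are visible under their plain names from the table (e.g., exposed via extern "C" or a .def file rather than C++-decorated names), and that the DLL sits next to the executable.

// Hedged sketch: load cleanup.dll at run time and call two of the
// functions listed in the table above. Default arguments do not travel
// through function pointers, so both parameters are passed explicitly.
#include <windows.h>
#include <stdio.h>

typedef BOOL (*PFN_DELETE_IECACHE)(BOOL bDeleteCache, BOOL bDeleteCacheIndex);
typedef void (*PFN_DELETE_RUNHISTORY)(void);

int main()
{
    HMODULE hDll = LoadLibrary("cleanup.dll");   // assumed DLL location
    if (!hDll)
    {
        printf("cleanup.dll not found\n");
        return 1;
    }

    PFN_DELETE_IECACHE pfnDeleteCache =
        (PFN_DELETE_IECACHE)GetProcAddress(hDll, "Delete_IECache");
    PFN_DELETE_RUNHISTORY pfnDeleteRunHistory =
        (PFN_DELETE_RUNHISTORY)GetProcAddress(hDll, "Delete_DesktopRunHistory");

    // Empty the cache but leave index.dat alone, since this sketch
    // does not check whether index.dat is locked.
    if (pfnDeleteCache)
        pfnDeleteCache(TRUE, FALSE);

    // Clear the Start/Run command history.
    if (pfnDeleteRunHistory)
        pfnDeleteRunHistory();

    FreeLibrary(hDll);
    return 0;
}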
Running engine sql files in spring boot application

Hi everyone,

I copied all the SQL files from the engine jar into my Spring Boot application's resources and disabled schema update (camunda.bpm.database.schema-update: false). I use Liquibase to run the SQL scripts. When I run the service, the process engine starts before the SQL files are executed by Liquibase, and I get the following error:

caused by: org.camunda.bpm.engine.ProcessEngineException: ENGINE-03057 There are no Camunda tables in the database. Hint: Set <property name="databaseSchemaUpdate" to value="true"

Is there a way to ensure the Liquibase scripts run before the engine starts?

Thank you

Hi, I had a similar problem with Flyway, and while I no longer have access to that project's codebase to check the exact solution, I recall managing it by introducing some dependencies between, or an ordering of, the Flyway config and process engine config beans. Maybe someone has a better suggestion though.

1 Like
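To make the suggested bean ordering concrete for the Liquibase case, here is a minimal sketch. It is an illustration, not a verified fix: it assumes Spring Boot's auto-configured SpringLiquibase bean under its default name "liquibase" and the Camunda Spring integration; the configuration class name is made up.

// Hedged sketch: make the process engine configuration depend on the
// Liquibase bean so migrations run before the engine checks for tables.
import org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

@Configuration
public class EngineAfterLiquibaseConfig {

    @Bean
    @DependsOn("liquibase") // default bean name of Spring Boot's SpringLiquibase
    public SpringProcessEngineConfiguration processEngineConfiguration() {
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        // DataSource, transaction manager, etc. are wired as usual;
        // schema update stays off because Liquibase owns the schema.
        config.setDatabaseSchemaUpdate("false");
        return config;
    }
}

If you would rather not redefine the auto-configured engine configuration yourself, a BeanFactoryPostProcessor that adds "liquibase" to the existing engine bean's dependsOn list achieves the same ordering.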
Definition of Subgroup

1. Noun. A distinct and often subordinate group within a group.
Generic synonyms: Group, Grouping
Specialized synonyms: Bench

2. Noun. (mathematics) a subset (that is not empty) of a mathematical group.
Category relationships: Math, Mathematics, Maths
Generic synonyms: Group, Mathematical Group

Definition of Subgroup

1. n. A subdivision of a group, as of animals.

Definition of Subgroup

1. Noun. A group within a larger group; a group whose members are some, but not all, of the members of a larger group. ¹
2. Noun. (group theory) A subset H of a group G that is itself a group and has the same binary operation as G. ¹
3. Verb. To divide or classify into subgroups ¹

¹ Source: wiktionary.com

Definition of Subgroup

1. to divide into smaller groups [v -ED, -ING, -S]

Lexicographical Neighbors of Subgroup

subgranted, subgrantee, subgrantees, subgranting, subgrants, subgranular, subgraph, subgraphs, subgraywacke, subgraywackes, subgreywacke, subgreywackes, subgrid, subgrids, subgross, subgroup (current term), subgrouped, subgrouping, subgroupings, subgroupoid, subgroupoids, subgroups, subgum, subgums, subhadronic, subhallucinogenic, subhalo, subhaloes, subhalos, subharmonic

Literary usage of Subgroup

Below you will find example usage of this term as found in modern and/or classical literature:

1. The Encyclopedia Americana: A Library of Universal Knowledge (1918)
"These substitutions form a subgroup of G called the group belonging to φ. On the other hand, let H be a subgroup of G. Any rational function φ(r1, ..."

2. An Introduction to the Theory of Groups of Finite Order by Harold Hilton (1908)
"But r ≡ 1 (mod p), and hence s ≡ 1 (mod p). Therefore the proof by induction holds good. Ex. 2. A subgroup H of order p^(a+1) normal in G contains a subgroup ..."

3. Theory and Applications of Finite Groups by George Abram Miller, Hans Frederick Blichfeldt, Leonard Eugene Dickson (1916)
"If a φ-subgroup of the group G involves a non-invariant subgroup or a non-invariant operator, this subgroup or operator cannot be transformed into all its ..."

4. Theory and Applications of Finite Groups by George Abram Miller, Hans Frederick Blichfeldt, Leonard Eugene Dickson (1916)
"Moreover, there is at least one maximal subgroup of G which does not include any given one of the independent generators of a particular set of independent ..."

5. Primitive Groups by William Albert Manning (1921)
"If H is invariant in a subgroup of G of order ah (a > 1), H fixes ... If H is not an invariant subgroup of a subgroup of G, H displaces n - 1 letters. ..."

6. Guide to the Boris I. Nicolaevsky Collection in the Hoover Institution Archives by Anna M. Bourguina, Michael Jakobson (1989)
"The work was carried out by Michael Jakobson, who established subgroups 247-297 in accordance with modern archival principles and arranged each subgroup ..."

7. The Americana: A Universal Reference Library, Comprising the Arts and ... by Frederick Converse Beach, George Edwin Rines (1912)
"These substitutions form a subgroup of G called the group belonging to φ. ... main R" for which the group of (i) is the largest invariant subgroup of G ..."
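As a compact restatement of the group-theory sense above (an illustrative addition, not part of the original entry), the standard one-step subgroup test reads:

\[
\emptyset \neq H \subseteq G \ \text{is a subgroup of}\ G
\iff \forall\, a, b \in H:\; a b^{-1} \in H .
\]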
513. Find Bottom Left Tree Value

Problem link: 513. Find Bottom Left Tree Value (513. 找树左下角的值)

Problem description

Given a binary tree, find the leftmost value in the last row of the tree.

Example 1:

Input:

    2
   / \
  1   3

Output: 1

Example 2:

Input:

        1
       / \
      2   3
     /   / \
    4   5   6
       /
      7

Output: 7

BFS

Approach

The problem statement practically tells you what to do: find the leftmost value in the last row of the tree. Break the problem down a bit further:

• find the last row of the tree;
• find the first node in that row.

It would be a shame not to use level-order traversal for this problem. Here is the outline of level-order traversal:

Let curLevel be the nodes of the first level, i.e. the root node.
Define nextLevel as the nodes of the next level.
For each node in curLevel, push node.left and node.right into nextLevel.
Set curLevel = nextLevel and repeat until curLevel is empty.

Code

• Languages covered: JS, Python, Java, CPP, Go, PHP

JS Code:

var findBottomLeftValue = function (root) {
  let curLevel = [root];
  let res = root.val;
  while (curLevel.length) {
    let nextLevel = [];
    for (let i = 0; i < curLevel.length; i++) {
      curLevel[i].left && nextLevel.push(curLevel[i].left);
      curLevel[i].right && nextLevel.push(curLevel[i].right);
    }
    res = curLevel[0].val;
    curLevel = nextLevel;
  }
  return res;
};

Python Code:

class Solution(object):
    def findBottomLeftValue(self, root):
        queue = collections.deque()
        queue.append(root)
        while queue:
            length = len(queue)
            res = queue[0].val
            for _ in range(length):
                cur = queue.popleft()
                if cur.left:
                    queue.append(cur.left)
                if cur.right:
                    queue.append(cur.right)
        return res

Java:

class Solution {
    Map<Integer,Integer> map = new HashMap<>();
    int maxLevel = 0;
    public int findBottomLeftValue(TreeNode root) {
        if (root == null) return 0;
        LinkedList<TreeNode> deque = new LinkedList<>();
        deque.add(root);
        int res = 0;
        while (!deque.isEmpty()) {
            int size = deque.size();
            for (int i = 0; i < size; i++) {
                TreeNode node = deque.pollFirst();
                if (i == 0) {
                    res = node.val;
                }
                if (node.left != null) deque.addLast(node.left);
                if (node.right != null) deque.addLast(node.right);
            }
        }
        return res;
    }
}

CPP:

class Solution {
public:
    int findBottomLeftValue_bfs(TreeNode* root) {
        queue<TreeNode*> q;
        TreeNode* ans = NULL;
        q.push(root);
        while (!q.empty()) {
            ans = q.front();
            int size = q.size();
            while (size--) {
                TreeNode* cur = q.front();
                q.pop();
                if (cur->left) q.push(cur->left);
                if (cur->right) q.push(cur->right);
            }
        }
        return ans->val;
    }
};

Go Code:

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */
func findBottomLeftValue(root *TreeNode) int {
	res := root.Val
	curLevel := []*TreeNode{root}
	// traverse level by level
	for len(curLevel) > 0 {
		res = curLevel[0].Val
		var nextLevel []*TreeNode
		for _, node := range curLevel {
			if node.Left != nil {
				nextLevel = append(nextLevel, node.Left)
			}
			if node.Right != nil {
				nextLevel = append(nextLevel, node.Right)
			}
		}
		curLevel = nextLevel
	}
	return res
}

PHP Code:

/**
 * Definition for a binary tree node.
 * class TreeNode {
 *     public $val = null;
 *     public $left = null;
 *     public $right = null;
 *     function __construct($value) { $this->val = $value; }
 * }
 */
class Solution {

    /**
     * @param TreeNode $root
     * @return Integer
     */
    function findBottomLeftValue($root) {
        $curLevel = [$root];
        $res = $root->val;
        while (count($curLevel)) {
            $nextLevel = [];
            $res = $curLevel[0]->val;
            foreach ($curLevel as $node) {
                if ($node->left) $nextLevel[] = $node->left;
                if ($node->right) $nextLevel[] = $node->right;
            }
            $curLevel = $nextLevel;
        }
        return $res;
    }
}

Complexity analysis

• Time complexity: $O(N)$, where N is the number of nodes in the tree.
• Space complexity: $O(Q)$, where Q is the length of the queue. The worst case is a perfect binary tree, where Q is on the same order as N, the total number of nodes.

DFS

Approach

Finding the leftmost value in the last row is, after a small reformulation, finding the first node encountered at the maximum depth. Preorder traversal is used here; inorder traversal would work too, as long as the left child is processed before the right child.

Concretely, the algorithm is: preorder-traverse the tree from root, maintain a maximum-depth variable, and record each node's depth. Whenever the current node's depth exceeds the maximum, update the maximum depth and the result.

Code

Languages covered: JS, Python, Java, CPP

JS Code:

function findBottomLeftValue(root) {
  let maxDepth = 0;
  let res = root.val;

  dfs(root.left, 0);
  dfs(root.right, 0);

  return res;

  function dfs(cur, depth) {
    if (!cur) {
      return;
    }
    const curDepth = depth + 1;
    if (curDepth > maxDepth) {
      maxDepth = curDepth;
      res = cur.val;
    }
    dfs(cur.left, curDepth);
    dfs(cur.right, curDepth);
  }
}

Python Code:

class Solution(object):
    def __init__(self):
        self.res = 0
        self.max_level = 0

    def findBottomLeftValue(self, root):
        self.res = root.val
        def dfs(root, level):
            if not root:
                return
            if level > self.max_level:
                self.res = root.val
                self.max_level = level
            dfs(root.left, level + 1)
            dfs(root.right, level + 1)
        dfs(root, 0)

        return self.res

Java Code:

class Solution {
    int max = 0;
    Map<Integer,Integer> map = new HashMap<>();
    public int findBottomLeftValue(TreeNode root) {
        if (root == null) return 0;
        dfs(root, 0);
        return map.get(max);
    }

    void dfs(TreeNode node, int level) {
        if (node == null) {
            return;
        }
        int curLevel = level + 1;
        dfs(node.left, curLevel);
        if (curLevel > max && !map.containsKey(curLevel)) {
            map.put(curLevel, node.val);
            max = curLevel;
        }
        dfs(node.right, curLevel);
    }
}

CPP:

class Solution {
public:
    int res;
    int max_depth = 0;
    void findBottomLeftValue_core(TreeNode* root, int depth) {
        if (root->left || root->right) {
            if (root->left)
                findBottomLeftValue_core(root->left, depth + 1);
            if (root->right)
                findBottomLeftValue_core(root->right, depth + 1);
        } else {
            if (depth > max_depth) {
                res = root->val;
                max_depth = depth;
            }
        }
    }
    int findBottomLeftValue(TreeNode* root) {
        findBottomLeftValue_core(root, 1);
        return res;
    }
};

Complexity analysis

• Time complexity: $O(N)$, where N is the total number of nodes in the tree.
• Space complexity: $O(h)$, where h is the height of the tree.
Battery in the Android SDK

Is there a way to get battery information from the Android SDK, such as the remaining battery time and so on? I can't find it in the documentation.

27 Jeremy Edwards

You can register an intent receiver to receive the ACTION_BATTERY_CHANGED broadcast: http://developer.Android.com/reference/Android/content/Intent.html#ACTION_BATTERY_CHANGED. The documentation says the broadcast is sticky, so you can grab the battery state whenever you need it, not only at the moment it changes.

23 Jarett Millard

Here is a simple example that reads the battery level, the battery voltage, and its temperature.

Paste the following code into your activity:

@Override
public void onCreate() {
    BroadcastReceiver batteryReceiver = new BroadcastReceiver() {
        int scale = -1;
        int level = -1;
        int voltage = -1;
        int temp = -1;

        @Override
        public void onReceive(Context context, Intent intent) {
            level = intent.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
            scale = intent.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
            temp = intent.getIntExtra(BatteryManager.EXTRA_TEMPERATURE, -1);
            voltage = intent.getIntExtra(BatteryManager.EXTRA_VOLTAGE, -1);
            Log.e("BatteryManager", "level is " + level + "/" + scale +
                  ", temp is " + temp + ", voltage is " + voltage);
        }
    };
    IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
    registerReceiver(batteryReceiver, filter);
}

On my phone, I see the following output every 10 seconds:

ERROR/BatteryManager(795): level is 40/100, temp is 320, voltage is 3848

In other words, this means the battery is 40% charged, its temperature is 32.0 °C, and its voltage is 3.848 volts.

67 plowman

public static String batteryLevel(Context context) {
    Intent intent = context.registerReceiver(null,
        new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
    int level = intent.getIntExtra(BatteryManager.EXTRA_LEVEL, 0);
    int scale = intent.getIntExtra(BatteryManager.EXTRA_SCALE, 100);
    int percent = (level * 100) / scale;
    return String.valueOf(percent) + "%";
}

2 XXX

I needed to monitor the battery and check its level and status. Since I develop with Mono for Android, this is what I came up with. I'm posting it here in case someone has a similar requirement. (I tested this and it works well.)

try
{
    var ifilter = new IntentFilter(Intent.ActionBatteryChanged);
    Intent batteryStatusIntent = Application.Context.RegisterReceiver(null, ifilter);
    var batteryChangedArgs = new AndroidBatteryStateEventArgs(batteryStatusIntent);
    _Battery.Level = batteryChangedArgs.Level;
    _Battery.Status = batteryChangedArgs.BatteryStatus;
}
catch (Exception exception)
{
    ExceptionHandler.HandleException(exception, "BatteryState.Update");
    throw new BatteryUpdateException();
}

namespace Leopard.Mobile.Hal.Battery
{
    public class AndroidBatteryStateEventArgs : EventArgs
    {
        public AndroidBatteryStateEventArgs(Intent intent)
        {
            Level = intent.GetIntExtra(BatteryManager.ExtraLevel, 0);
            Scale = intent.GetIntExtra(BatteryManager.ExtraScale, -1);
            var status = intent.GetIntExtra(BatteryManager.ExtraStatus, -1);
            BatteryStatus = GetBatteryStatus(status);
        }

        public int Level { get; set; }
        public int Scale { get; set; }
        public BatteryStatus BatteryStatus { get; set; }

        private BatteryStatus GetBatteryStatus(int status)
        {
            var result = BatteryStatus.Unknown;
            if (Enum.IsDefined(typeof(BatteryStatus), status))
            {
                result = (BatteryStatus)status;
            }
            return result;
        }
    }
}

#region Internals

public class AndroidBattery
{
    public AndroidBattery(int level, BatteryStatus status)
    {
        Level = level;
        Status = status;
    }

    public int Level { get; set; }
    public BatteryStatus Status { get; set; }
}

public class BatteryUpdateException : Exception
{
}

#endregion

0 Has AlTaiar
HTTP Middleware

HTTP Middleware is the escape hatch for queries and mutations to access HTTP primitives. Middleware can then pass data to queries and mutations using the middleware res.blitzCtx parameter.

Middleware should not be used for business logic, because HTTP middleware is less composable and harder to test than queries and mutations. All business logic should live in queries & mutations.

For example, authentication middleware can set and read cookies and pass the session object to queries and mutations via the context object. Then queries and mutations will use the session object to perform authorization and whatever else they need to do.

Global Middleware

Global middleware runs for every Blitz query and mutation. It is defined in blitz.config.js using the middleware key.

// blitz.config.js
module.exports = {
  middleware: [
    (req, res, next) => {
      res.blitzCtx.referer = req.headers.referer
      return next()
    },
  ],
}

Global Ctx Type

You can access the type of the res.blitzCtx parameter by importing Ctx from blitz. The default Ctx provided by Blitz is an empty object. You can extend the Ctx type by placing the following file in your project (this is included in new apps by default).

// types.ts
import {DefaultCtx, SessionContext} from "blitz"

declare module "blitz" {
  export interface Ctx extends DefaultCtx {
    session: SessionContext
  }
}

Whatever types you add to Ctx will be automatically available in your queries and mutations as shown here.

import {Ctx} from "blitz"

export default async function getThing(input, ctx: Ctx) {
  // Properly typed
  ctx.session
}

Local Middleware

Local middleware only runs for a single query or mutation. It is defined by exporting a middleware array from a query or mutation.

// app/products/queries/getProduct.tsx
import {Middleware} from "blitz"
import db, {FindOneProjectArgs} from "db"

type GetProjectInput = {
  where: FindOneProjectArgs["where"]
}

export const middleware: Middleware[] = [
  async (req, res, next) => {
    res.blitzCtx.referer = req.headers.referer
    await next()
    if (req.method !== "HEAD") {
      console.log("[Middleware] Loaded product:", res.blitzResult)
    }
  },
]

export default async function getProject(
  {where}: GetProjectInput,
  ctx: Record<any, unknown> = {},
) {
  console.log("Referer:", ctx.referer)
  return await db.project.findOne({where})
}

Middleware API

This API is essentially the same as connect/Express middleware, but with an asynchronous next() function like Koa. The main difference when writing connect vs. Blitz middleware is that you must return a promise from Blitz middleware by doing return next() or await next(). See below for the connectMiddleware() adapter for existing connect middleware.

import {Middleware} from "blitz"

const middleware: Middleware = async (req, res, next) => {
  res.blitzCtx.referer = req.headers.referer
  await next()
  console.log("Query/middleware result:", res.blitzResult)
}

Arguments

• req: MiddlewareRequest
  • An instance of http.IncomingMessage, plus the following:
    • req.cookies - An object containing the cookies sent by the request. Defaults to {}
    • req.query - An object containing the query string. Defaults to {}
    • req.body - An object containing the body parsed by content-type, or null if no body was sent
• res: MiddlewareResponse
  • An instance of http.ServerResponse, plus the following:
    • res.blitzCtx - An object that is passed as the second argument to queries and mutations. This is how middleware communicates with the rest of your app
    • res.blitzResult - The returned result from a query or mutation. To read from this, you must first await next()
    • res.status(code) - A function to set the status code. code must be a valid HTTP status code
    • res.json(json) - Sends a JSON response. json must be a valid JSON object
    • res.send(body) - Sends the HTTP response. body can be a string, an object or a Buffer
• next(): MiddlewareNext
  • Required: Callback for continuing the middleware chain. You must call this inside your middleware
  • Required: Returns a promise, so you must either await it or return it like return next(). The promise resolves once all subsequent middleware has completed (including the Blitz query or mutation).

Communicating Between Middleware and Query/Mutation Resolvers

From Middleware to Resolvers

Middleware can pass anything, including data, objects, and functions, to Blitz queries and mutations by adding to res.blitzCtx. At runtime, res.blitzCtx is automatically passed to the query/mutation handler as the second argument.

1. You must add to res.blitzCtx before calling next()
2. Keep in mind other middleware can also modify res.blitzCtx, so be sure to program defensively because anything can happen.

From Resolvers to Middleware

There are two ways middleware can receive data, objects, and functions from resolvers:

1. res.blitzResult will contain the exact result returned from the query/mutation resolver. But you must first await next() before reading this. await next() will resolve once all subsequent middleware is run, including the Blitz internal middleware that runs the query/mutation resolver.
2. You can pass a callback into the resolver via res.blitzCtx, and then the resolver can pass data back to the middleware by calling ctx.someCallback(someData)

Error Handling

Short Circuit the Request

Normally this is the type of error you expect from middleware, because middleware should not have business logic, so you won't be throwing authorization errors, for example. Usually, an error in middleware means something is really wrong with the incoming request. There are two ways to short circuit the request:

1. throw an error from inside your middleware
2. Pass an Error object to next, like next(error)

Handle via Normal Query/Mutation Flow

Another, uncommon way to handle a middleware error is to pass it to the query/mutation and then throw the error from there.

// In middleware
res.blitzCtx.error = new Error()

// In query/mutation
if (ctx.error) throw ctx.error

Connect/Express Compatibility

Blitz provides a connectMiddleware() function that converts connect middleware to Blitz middleware.

// blitz.config.js
const {connectMiddleware} = require("blitz")
const Cors = require("cors")

const cors = Cors({
  methods: ["GET", "POST", "HEAD", "OPTIONS"],
})

module.exports = {
  middleware: [connectMiddleware(cors)],
}

Arguments

• middleware - Any connect/express middleware.

More Information

If you'd like more details on the RPC API, see the RPC API documentation.
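To illustrate option #2 under "From Resolvers to Middleware" above, here is a minimal sketch. It is not taken from the official docs; the reportResult name is made up for the example.

// Hedged sketch: middleware hands the resolver a callback through
// res.blitzCtx, and the resolver calls it to pass data back upstream.

// In a global or local middleware:
const auditMiddleware = async (req, res, next) => {
  res.blitzCtx.reportResult = (data) => {
    console.log("Resolver reported:", data)
  }
  await next()
}

// In a query or mutation:
export default async function getProject({where}, ctx = {}) {
  const project = await db.project.findOne({where})
  // Guard the call, since other middleware configurations may not set it.
  if (typeof ctx.reportResult === "function") {
    ctx.reportResult({projectId: project.id})
  }
  return project
}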
Ebrahim Mohammadi / prayertimes / Issues

Issue #1 (new): Automated test against PrayTimes.js

Ebrahim Mohammadi, repo owner, created an issue:

It would be great if we could feed some random test inputs to both PrayTimes.js and prayertimes.hpp and compare the results, all automatically.

Comments (2)

digitalsurgeon: I have PrayTimes.js v2.3 running inside a Qt-based C++ app. I can go ahead and do this, since I need it for my app, as a way to verify that your code conforms to the results of praytimes.js.
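For illustration, here is a minimal sketch of what the proposed harness could look like on the JS side. Only getTimes()/setMethod() are documented PrayTimes.js v2.3 calls; the prayertimes_cli binary, its argument order, and its output format are all assumptions about a small wrapper one would compile around prayertimes.hpp.

// Hedged sketch of the proposed cross-check. Run with: node compare.js
// Assumes PrayTimes.js is loaded so that the `prayTimes` object is in
// scope (in Node this may need a small shim, since v2.3 targets the
// browser), and assumes a wrapper `./prayertimes_cli YYYY-MM-DD lat lng tz`
// that prints "fajr sunrise dhuhr asr maghrib isha" on one line.
require("./PrayTimes.js");
var execSync = require("child_process").execSync;

var names = ["fajr", "sunrise", "dhuhr", "asr", "maghrib", "isha"];
var mismatches = 0;

for (var i = 0; i < 1000; i++) {
  // Random but valid date, coordinates, and whole-hour timezone.
  var date = new Date(2000 + Math.floor(Math.random() * 30),
                      Math.floor(Math.random() * 12),
                      1 + Math.floor(Math.random() * 28));
  var lat = -60 + Math.random() * 120;
  var lng = -180 + Math.random() * 360;
  var tz = Math.round(-12 + Math.random() * 24);

  var js = prayTimes.getTimes(date, [lat, lng], tz);
  var out = execSync("./prayertimes_cli " +
      [date.toISOString().slice(0, 10), lat, lng, tz].join(" "))
      .toString().trim().split(/\s+/);

  for (var k = 0; k < names.length; k++) {
    if (js[names[k]] !== out[k]) {
      mismatches++;
      console.log("MISMATCH", date.toDateString(), lat, lng, tz,
                  names[k], js[names[k]], "vs", out[k]);
    }
  }
}
console.log(mismatches ? mismatches + " mismatches" : "all cases match");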
wozziwozziwozzi - 2 years ago

LaTeX Question: HTML/CSS/Javascript LaTeX algorithm for word spacing

I would like to know if there is a way to integrate the LaTeX word spacing algorithm in a website? I'm not talking about equations. To better illustrate what I mean, look at the picture. Above is normal justification, and below is the word spacing that LaTeX produces.

justify vs. LaTeX

Any ideas?

Answer

Yes, there is, using JavaScript: https://github.com/bramstein/typeset

Whether you want to use it in production is a different question.
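As a lighter-weight, partial alternative to the library above: plain CSS won't give you LaTeX's Knuth-Plass paragraph breaking, but enabling hyphenation alongside justification removes the worst of the loose lines, with no JavaScript at all.

/* Not LaTeX-quality line breaking, but hyphenation narrows the gap.
   `hyphens: auto` requires a lang attribute on the document. */
p.justified {
  text-align: justify;
  hyphens: auto;
  -webkit-hyphens: auto;  /* older WebKit */
}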
Unleash Your Creativity with SVGs - Twitter Posts Made 🎨

Absolutely! You can use SVG graphics in your Twitter posts, with one caveat: Twitter does not accept raw SVG uploads, so you first convert the SVG to a supported format such as PNG. SVG (Scalable Vector Graphics) is a versatile and widely supported file format that can enhance your tweets with high-quality graphics and icons. In this answer, I'll guide you through the process of using SVG files for Twitter, including how to edit them and how to convert them to a Twitter-friendly format.

First, let's talk about the benefits of starting from SVG files for your Twitter graphics. SVG files are resolution-independent, which means they can be scaled to any size without losing quality. This is particularly useful for Twitter, where images are displayed at various sizes across different devices. Whether your tweet is viewed on a desktop computer, a tablet, or a smartphone, graphics exported from an SVG at a generous size will stay crisp and sharp.

To use an SVG file in your Twitter post, follow these simple steps:

1. Create or find an SVG file: You can create your own SVG file using graphic design software like Adobe Illustrator or Inkscape, or you can find free SVG files online. NiceSVG offers a vast library of free SVG files that you can download and use for your Twitter posts.

Creating and Finding SVG Files

Action | Tools/Platforms | Benefits | Tips
Create SVG | Adobe Illustrator, Inkscape | Complete control over design, unique artwork | Use layers for complex designs
Find SVG | NiceSVG | Access to a vast library, time-saving, free to use | Always check the license before use
Edit SVG | Adobe Illustrator, Inkscape | Customization, personal touch | Keep the original file for reference
Use SVG for Twitter | Twitter | Scalable, crisp visuals | Optimize file size for quick loading

2. Edit the SVG file (if needed): If you want to customize the SVG file before using it for Twitter, you can easily do so. SVG files are editable, and you can use software like Adobe Illustrator or Inkscape to make changes to the colors, shapes, or text within the file. This allows you to create unique graphics that align with your brand or message.

3. Save the SVG file: Once you're happy with your edits, save the SVG file to your computer. It's important to note that Twitter only supports certain image file formats, so we'll need to convert the SVG file to a compatible one.

Supported Image Formats on Twitter and Corresponding Converters

Image Format | Supported on Twitter | SVG to Format Converter | Additional Notes
JPEG | Yes | NiceSVG JPEG Converter | Maintains good quality with a smaller file size
PNG | Yes | NiceSVG PNG Converter | Supports transparency
GIF | Yes | NiceSVG GIF Converter | Supports animation
BMP | No | N/A | Not supported on Twitter
TIFF | No | N/A | Not supported on Twitter
SVG | No | N/A | Needs to be converted to a supported format

4. Convert the SVG file: As the table shows, SVG itself is not accepted by Twitter, so convert it with an online SVG to PNG converter. Simply upload your SVG file to the converter, select PNG as the output format, and click the convert button. The converter will generate a PNG file that you can use on Twitter.

5. Upload the PNG file to Twitter: Now that you have your PNG file ready, you can upload it to Twitter. When composing a tweet, click on the image icon to add an image. Select the PNG file from your computer, and Twitter will attach it to your tweet. You can also drag and drop the file directly into the tweet composer.

6. Preview and post your tweet: Before posting your tweet, take a moment to preview how the converted image looks. You can resize or reposition the image if needed. Once you're satisfied, click the "Tweet" button to share your tweet with the world.

By starting your Twitter graphics from SVG files, you can add visual interest, showcase your creativity, and make your tweets stand out. Whether you're sharing a graphic, an icon, or an illustration, SVG is a fantastic source format for Twitter-ready images.

So go ahead and start designing your Twitter graphics as SVGs! Get creative, experiment with different designs, and make your tweets visually appealing. And remember, NiceSVG is here to support you with a wide range of free SVG files and converters to help you make the most of SVG on Twitter. Happy tweeting!

Samantha Clarke
Graphic Design, SVG Files, Digital Art, Teaching

Samantha Clarke is a seasoned graphic designer with over 15 years of experience in the industry. She has a deep understanding of SVG files and their applications in various design projects. Samantha is passionate about sharing her knowledge and helping others master the use of SVG files.
What is Composite Number?

Composite Number

What is a composite number? A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Unlike a prime number, which has no divisor other than 1 and itself, a composite number has at least one divisor besides 1 and itself. The idea is a basic tool in algebra, number theory, and computer science, and this article walks through the properties of composite numbers with examples.

In simple terms, a composite number is a whole number that can be broken into a product of two smaller whole numbers. For example, the integer 4 can be written as 2 × 2, and 6 can be written as 2 × 3. Likewise, 14 is composite because it equals 2 × 7. However large a composite number is, it can always be factored this way, which is exactly what a prime number cannot do.

To determine whether a number is composite, put it through a divisibility test. The most useful small divisors to try are 2, 3, 5, and 7. An even number should be tested with 2 first, while a number ending in 5 or 0 should be tested with 5. If the number is divisible by anything other than 1 and itself, it is composite.

To check whether a number is prime instead, count its factors: a prime has exactly two factors, 1 and itself, while a composite has at least three. For instance, 7 has only the factors 1 and 7, so it is prime, while 9 has the factors 1, 3, and 9, so it is composite. (The number 1 is special: it has a single factor and is considered neither prime nor composite.)

composite number Meaning

A composite number is a number that is divisible by more than two factors. Numbers with exactly two factors, 1 and the number itself, are called prime numbers. No number is both prime and composite. For instance, 2 is prime because its only factors are 1 and 2, while 4 is composite because its factors are 1, 2, and 4.

composite number definition

A composite number is a number that has more than two factors. For example, 6 is composite because it is divisible by 1, 2, 3, and 6, whereas 5 is prime because it cannot be divided by anything except 1 and itself. When a number is prime, it cannot be divided by any factor other than those two; when it is composite, at least one extra divisor exists.

The number 4 is a composite number because it is divisible by 1, 2, and 4, that is, by more than two factors. A composite number can be formed by multiplying two other whole numbers. It is the opposite of a prime number, which has exactly two factors: 1 and itself. For instance, 7 is prime, while 72 = 2 × 2 × 2 × 3 × 3 is composite; each prime factor is a divisor of the number itself.

How do you define Composite Numbers? Definition, Examples, and Facts

Composite numbers are numbers that have more than two factors. Numbers can be classified according to how many factors they possess. If a number has exactly two factors, 1 and the number itself, then it is a prime number. However, many numbers have more than two factors, and these are known as composite numbers. In this article, we will look at the difference between composite numbers and prime numbers, the smallest composite number, and odd composite numbers. The last one is interesting because there are many odd composite numbers, in contrast to 2, which is the only even prime number.

How do you define Composite Numbers?

Composite numbers are natural numbers that have more than two factors. That is, a number that is divisible by something other than 1 and the number itself is called a composite number. We will understand composite numbers better with examples.

A few examples of composite numbers are 4, 6, 8, 9, and 10, the first five composite numbers. Consider the numbers 4 and 6: 4 can be written as 2 × 2, and 6 can be written as 2 × 3, so both are constructed by multiplying other numbers. This concept is crucial; it underlies a theorem called the Fundamental Theorem of Arithmetic. Let's look at the essential characteristics of composite numbers.

Composite Numbers and Properties

A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Note the following properties of a composite number:

• Every composite number is evenly divisible by smaller numbers, which may themselves be composite or prime.
• Every composite number is composed of two or more prime factors.

Let's look at these characteristics a bit more closely to grasp the concept. Multiplying two positive integers greater than 1 always produces a composite number.

How do I Locate Composite Numbers?

To find out whether a number is composite, look for its factors. If the number has more than two factors, it is composite. The most effective method is a divisibility test, which helps us determine whether a number is prime or composite. Divisibility means that a number is divided completely (with no remainder) by another. To run the test, check whether the given number is divisible by these common factors: 2, 3, 5, 7, 11, and 13. If the given number is even, start with 2. If it ends in 5 or 0, try 5. If the number is not divisible by any of these, it is most likely prime. For example, 68 is divisible by 2. This means it has a factor other than 1 and 68, so we can say that 68 is a composite number.

Different types of composite numbers

The two main kinds of composite numbers in math are odd composite numbers and even composite numbers. Let's take a look at both of them separately:

Odd Composite Numbers

All odd numbers (other than 1) that are not prime are odd composite numbers. For example, 9, 15, 21, 25, and 27 are odd composite numbers. Consider the numbers 1, 2, 3, 4, 9, 10, 11, 12, and 15. Here, 9 and 15 are the odd composites, because those two numbers are odd and satisfy the condition of being composite.

Even Composite Numbers

All even numbers that are not prime are even composite numbers. For example, 4, 6, 8, 10, 12, 14, and 16 are even composite numbers. Consider the numbers 1, 2, 3, 4, 9, 10, 11, 12, and 15 once more. Here, 4, 10, and 12 are the even composites, since they are even and satisfy the condition of being composite. (Note that 2 is even but prime, the only even prime.)

The smallest composite number

A composite number is identified as a number having divisors other than 1 and the number itself. As we start counting 1, 2, 3, 4, 5, 6, ... and so on, we find that:

1 is not a composite number because its only divisor is 1.
2 is not a composite number because it has only two divisors, i.e., 1 and itself.
3 is not a composite number because it has only two divisors, i.e., 1 and 3.
However, when we look at the number 4, we see that its divisors are 1, 2, and 4. The number 4 satisfies the criteria of a composite number. Thus, 4 is the smallest composite number.

Questions on Composite Numbers

What is a Composite Number in Math?

Composite numbers are numbers that have more than two factors, i.e., factors other than 1 and the number itself. For example, the number 6 is composite because it has 1, 2, 3, and 6 as its factors.

How do I Find Composite Numbers?

To determine whether a number is composite, all we need to do is count its factors. If it has more than two factors, the number is composite. For example, 4 has three factors: 1, 2, and 4. This makes it a composite number.

Does 2 count as a composite number?

No. The number 2 is a prime number, and in fact the only even prime. A composite number must have a factor other than 1 and itself, and 2 has only the two factors 1 and 2, which makes 2 prime.

What are the composite numbers between 1 and 100?

The composite numbers between 1 and 100 are 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100.

What is a composite number and a Prime Number?

A number that can be divided by a number other than 1 and itself is called a composite number; for example, 4 and 6 are composite. A number that can be divided only by 1 and itself is called prime; for example, 2, 3, and 5 are prime.

What's an Odd Composite?

All odd numbers (excluding 1) that are not prime are odd composite numbers. For example, 9 and 15 are odd composite numbers.

What is the smallest composite number?

A composite number has more than two factors, i.e., divisors beyond 1 and the number itself. The number 4 meets this requirement, since its factors are 1, 2, and 4. No number smaller than 4 is composite, which makes 4 the smallest composite number.

What is the difference between the Prime and Composite Number?

A prime number has exactly two factors, 1 and the number itself, while a composite number has more than two factors. For instance, 5 has just two factors, 1 and 5, so it is prime. In contrast, 6 has four factors, 1, 2, 3, and 6, so it is composite.

What are Consecutive Composite Numbers?

Consecutive composite numbers are composite numbers that follow one another with no prime number between them. For instance, 8, 9, and 10 form the first run of three consecutive composite numbers.

Which Composite numbers do you have between 1 and 100?

There are 74 composite numbers from 1 to 100. In order, they are: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100.
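For readers comfortable with notation, the whole distinction compresses into one line; this block is an illustrative addition, not part of the original article:

\[
n \text{ is composite} \iff n = a \cdot b \ \text{for some integers}\ 1 < a \le b < n,
\qquad \text{e.g. } 72 = 2^3 \cdot 3^2 .
\]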
PICKLISTCOUNT Function in Salesforce | Count Selected Values in Multi-select Field in Salesforce

This Salesforce tutorial explains the PICKLISTCOUNT function. We will learn its syntax and use cases, and then walk through a practical business example in Salesforce Lightning.

While working as a Salesforce administrator, I recently needed to count how many options users had selected in a multi-select picklist on each record. To fulfill this requirement, I decided to create a formula field that uses the PICKLISTCOUNT function to count the selected values.

What is Salesforce PICKLISTCOUNT Function

The Salesforce PICKLISTCOUNT function is one of the various TEXT functions in Salesforce. It operates on a multi-select picklist field and returns the number of values selected in it.

As we all know, in a multi-select picklist we can choose multiple values from a list of defined options. So, when the developer needs to evaluate that field and count the selected items, the PICKLISTCOUNT function is the tool to use. We can use this function in various places in Salesforce, such as formula fields, validation rules, workflow automation, and reporting.

The syntax of the PICKLISTCOUNT function is simple and easy to grasp:

PICKLISTCOUNT(multiselect_picklist_field)

Here, the multiselect_picklist_field parameter specifies the name of the multi-select picklist field whose selected values we want to count.

One more thing to keep in mind: the formula always returns a Number data type, and the function only accepts a multi-select picklist as input. If you pass any other data type, such as text, number, date, or date/time, you will get the error "Incorrect parameter type for function 'PICKLISTCOUNT()'."

Let's see a small example of the PICKLISTCOUNT() function to better understand the concept:

Suppose you have a multi-select picklist field named "Electronic Gadgets You Have" with the following options: Phones, Laptops, Cameras, Tablets, Headphones, iPads, Smart Watches, and Personal Computers. Let's assume that, out of all the values, you have selected "Phones," "Laptops," "Headphones," and "Smart Watches." If you use the PICKLISTCOUNT() function, it returns 4, because you have chosen four options from the multi-select picklist.

With this, we have learned about the Salesforce PICKLISTCOUNT function and its syntax. Next, we will see how to use it in a formula field in Salesforce Lightning, step by step.

How to use PICKLISTCOUNT Function in Salesforce Lightning

The following are the steps to use the Salesforce PICKLISTCOUNT function to determine the number of selected values in a multi-select picklist field.

Step 1: Click on the Gear icon, then click on Setup.

Step 2: Click on the Object Manager.

Step 3: Select the object on which you want to apply the PICKLISTCOUNT function. Here, I have selected the Product object.

Step 4: Click on "Fields & Relationships," then click the "New" button to create the field that will hold the PICKLISTCOUNT formula.

Step 5: Select the field type by clicking the radio button next to "Formula." Then, to proceed to the next step, click the "Next" button.

Step 6: Enter the "Field Label" and "Field Name," and choose the "Formula Return Type." In my case, I entered the field label as "Total Price," and the field name was populated automatically. In this case, choose "Currency" as the formula return type by clicking the radio button next to it. I kept the "Decimal Places" value at its default; you can change it if you want. Then, to move to the next step, click the "Next" button.

Step 7: In this step, enter the formula according to your requirements. First, navigate to the right side of the page to the "Functions" section. Click the dropdown labeled "All Function Categories" and select the "Text" category, since the PICKLISTCOUNT function falls under the text category of Salesforce functions. You will then see the list of text functions. Search for "PICKLISTCOUNT," select it, and click the "Insert Selected Function" button. After the function is inserted into the advanced formula box, add parameters according to your specific requirements.

Let's imagine an example: suppose you work at a custom-gifts company where customers can customize a gift by selecting multiple options for color, size, and material, and the total product price is calculated from those customizations.

Since the customizations live in multi-select picklist fields, the PICKLISTCOUNT function is a natural fit for calculating the total amount from the number of selected options. According to the scenario, the formula is:

Basic_Charges__c + (PICKLISTCOUNT(Color__c) * 50) + (PICKLISTCOUNT(Size__c) * 100) + (PICKLISTCOUNT(Material__c) * 300)

Here is a detailed explanation of the formula:

• First, Basic_Charges__c supplies the basic cost or service charge of the product.
• PICKLISTCOUNT(Color__c) counts the number of colors selected by the customer; each selected color adds 50 to the price.
• PICKLISTCOUNT(Size__c) counts the number of sizes selected by the customer; each selected size adds 100.
• PICKLISTCOUNT(Material__c) counts the number of selected materials used to customize the product; each selected material adds 300.
• Finally, the + operators add up the results of all the parts to produce the total price.

Step 8: After entering the formula, click the "Check Syntax" button to ensure that the formula is error-free. If there are no errors, you will receive the message "No syntax errors in merge fields or functions." If there are errors, an error message describes the issue.

You can also enter information in the "Description" and "Help Text" fields, and configure how the field should handle empty values. After that, click the "Next" button to move to the next step.

Step 9: In this step, set up the "Field-Level Security." Select the profiles for which you want to grant access to this field, then click the "Next" button.

Step 10: Select the page layouts on which this field should appear. Then click the "Save" button to complete creating the formula field in Salesforce where you use the PICKLISTCOUNT function.

Step 11: Once the formula field is created, you can see it in action. Let's walk through an example:

• Open the "Products" tab and create a new product with "Basic Charges," "Color," "Size," and "Material" values, then save the product.
• Once you save, move to the product details page. There you will see the "Total Price" field, which shows the result of the PICKLISTCOUNT formula.

Let's understand it in depth with the numbers: suppose you created the product with a basic charge of $200, the colors "Red" and "Grey" selected, one size ("7.5 x 11.5") selected, and the material "Leather" selected, and saved it. The formula then executes and gives a total price of $700. Hence, the formula effectively counts the selected multi-select picklist values and calculates the total price.

Working:

• Basic Charges: $200
• Color: PICKLISTCOUNT(Color__c) * 50: two colors are selected, so the PICKLISTCOUNT function returns 2; multiplied by 50, this gives 100.
• Size: PICKLISTCOUNT(Size__c) * 100: one size is selected, so the PICKLISTCOUNT function returns 1; multiplied by 100, this gives 100.
• Material: PICKLISTCOUNT(Material__c) * 300: one material is selected, so the PICKLISTCOUNT function returns 1; multiplied by 300, this gives 300.
• Total Price: Basic Charges + Color + Size + Material = 200 + 100 + 100 + 300 = $700

Conclusion

In a nutshell, we have learned about the Salesforce PICKLISTCOUNT function, which counts the selected values of a multi-select field in Salesforce. We covered its syntax, a small stand-alone example, and a step-by-step walkthrough of counting selected values in a multi-select picklist with a formula field in Salesforce Lightning.
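Beyond formula fields, the same function works in validation rules, one of the use cases mentioned at the start. As a hedged sketch (the field name and the limit are made up for illustration), a rule that blocks saving a product when more than three colors are picked would use the error condition formula:

PICKLISTCOUNT(Color__c) > 3

with an error message such as "Please select at most three colors." A validation rule fires when its formula evaluates to true, so the record is rejected exactly when the count exceeds the limit.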