url: stringlengths (13 to 4.35k)
tag: stringclasses (1 value)
text: stringlengths (109 to 628k)
file_path: stringlengths (109 to 155)
dump: stringclasses (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://dbafix.com/sql-query-how-to-know-if-is-resource-intensive-i-o/
code
I am not a SQL guru, but I am troubleshooting an issue with our marketing platform, as it's become sluggish and slow. I've requested the top 50 heavy/resource-intensive queries executed on our SQL Server to correlate with our marketing workflows. Here is an example of some of the top queries. Based on the average I/O, would you say these queries are consuming too many resources? What's a normal, acceptable I/O for a query? If I take the first query's execution plan, it is as follows. There is no single value that is good or bad for a query. It completely depends on what the query is doing. SELECT * without a WHERE clause from a table with billions of rows will have a very high level of I/O. Whereas SELECT ID with a WHERE clause on the ID column and a clustered index on the ID column will have about as low a level of I/O as it's possible to get. Neither is right or wrong, and there's no ">42 is bad" kind of measure here. Instead, you have to look at the query and what it's doing. Then, look at the execution plan to see how the query is being resolved. In this case, you have a table scan, meaning it's a heap with no clustered index, even though there are clear WHERE clause values that could use an index to find data. So either the heap doesn't have any nonclustered indexes, or the nonclustered indexes it has don't support the query. Does this mean there is excessive I/O going on? Yeah, probably. Additionally, you have a nonclustered index in use, skypipeline_eventid, but it's not a covering index, because you then have an RID lookup (another heap table). Is this excessive I/O? Yeah, probably. In general, the vast majority of your tables should have clustered indexes. Indications are, this database has none, or few (sample size of 2 ain't exactly dispositive). Without clustered indexes on the column(s) that define the most commonly used path to the data, you're doing nothing but table scans all over the place. So, yeah, you're probably experiencing excessive I/O.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00581.warc.gz
CC-MAIN-2023-40
2,041
8
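The advice above starts from a list of the heaviest queries and then looks at each execution plan. The first step, pulling the top consumers by average I/O, can be scripted against SQL Server's sys.dm_exec_query_stats DMV. A minimal sketch follows; the connection string and database name are placeholders, not details from the original post, and this is an illustration rather than the poster's actual tooling.

# Sketch: list the top 50 queries by average logical reads on SQL Server.
# Assumes the pyodbc package and VIEW SERVER STATE permission; the
# connection string below is a placeholder.
import pyodbc

TOP_IO_SQL = """
SELECT TOP (50)
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_logical_reads DESC;
"""

def top_io_queries(conn_str: str):
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(TOP_IO_SQL)
        return cursor.fetchall()

if __name__ == "__main__":
    rows = top_io_queries("DRIVER={ODBC Driver 17 for SQL Server};"
                          "SERVER=myserver;DATABASE=marketing;"
                          "Trusted_Connection=yes;")
    for avg_reads, execs, text in rows:
        print(f"{avg_reads:>12} avg reads  {execs:>8} execs  {text}")

As the answer notes, the numbers by themselves don't say good or bad; each result still has to be read against its execution plan and the indexes available to it.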
https://www.freelancer.com/projects/unity-3d/unity-pro-for-game-development/
code
We are a game development company, Falcon Interactive, with HQ in London, having a dedicated team of 82 experts in game programming, 3D modeling/2D design, AI, networking, online, social gaming, cross-p…

Hi, we are a team of Unity3D developers with experience developing games for mobile platforms, iOS and Android. Please see our first game which is not under NDA, made for a client from Sweden: https://itunes.apple.…

Hello, how are you? I have checked your project description and recognized that your project matches my skills. I am very excited to work on your project and I am fully capable of giving you high quality…

I am very interested in your project. I have already made many VR/AR apps using Unity. Once you contact me, you can see my old project URLs. My target is a 100% successful job. Stay tuned, I'm still working on thi…

Hi! How are you? I've read your requirements. I'm very interested in your project's requirements. I want to discuss details in chat and I can do this project. I've got much experience in 2D and 3D game development with Uni…

I have good hands-on experience with Unity 3D game development. I'm based in China. I can help you with Unity 3D game development. I would like to know more details, so I can understand what exactly the projec…

Hi, I have full experience developing games (Unreal & Unity3D) and mobile apps (iOS, Android). I can always provide the game and mobile app that you want with high quality and rate. I want to be honest and make wonder…

Hello sir, I am ready to start work and have done a lot of games and applications in Unity3D, so I will start your work first, with no need to pay any advance money, and I will give you de…

I am very interested in your project. Unity3D and C# programming are my top skills. Developing games especially is my passion, because I worked with them in my original company for 5 years.

This is Evgeny Vaganov. I am an experienced game developer and have worked on projects similar to what you are looking for. I am confident I can exceed your expectations. I am especially strong at game developm…

I just checked the details of the project and this is something which my studio can help you with. Square Pixel is an experienced studio with a team of 22 in-house artists with industry experience with…
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590199.42/warc/CC-MAIN-20180718135047-20180718155047-00560.warc.gz
CC-MAIN-2018-30
2,328
32
http://www.necrosoft.nl/?page_id=320
code
Hellbox is the editor for the Hell Tech Game Engine. It's provided as a plugin for tight integration with the engine.
- What you see is what you get
- Real-time editing
- It's a plugin, which means you can do everything with the engine that the editor can
- Debug webserver: you can view output and change/tweak settings via a browser
- Monitors file changes and reloads files if necessary (useful when editing in other programs)
- Simple-to-use GUI
Hellbox stand-alone
* Discontinued; the editor was ported to a plugin for the engine itself to provide better plugin support *
Hellbox is Hell Tech's official editor. The editor provides tight integration with the engine and is easily extensible with custom-made actors.
- Tight integration with the engine
- WYSIWYG (what you see is what you get)
- Play games inside the editor for testing and manipulation
- 3D paint: paint directly onto a 3D model
- Real-time shader editor
- Texture viewer
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817200.22/warc/CC-MAIN-20240418061950-20240418091950-00094.warc.gz
CC-MAIN-2024-18
937
16
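One of the features listed above, monitoring file changes and reloading assets, is easy to picture with a small sketch. The following is a generic polling watcher in Python; the directory name and the reload_asset hook are invented for illustration, and this is not Hellbox or Hell Tech code.

# Sketch of a hot-reload loop: poll a directory's modification times and
# call a reload hook when a file changes. Names here are hypothetical.
import os
import time

def reload_asset(path: str) -> None:
    print(f"reloading {path}")  # an engine would re-import the asset here

def watch(watch_dir: str, interval: float = 0.5) -> None:
    mtimes: dict[str, float] = {}
    while True:
        for root, _dirs, files in os.walk(watch_dir):
            for name in files:
                path = os.path.join(root, name)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue  # file removed between listing and stat
                if mtimes.get(path) not in (None, mtime):
                    reload_asset(path)
                mtimes[path] = mtime
        time.sleep(interval)

if __name__ == "__main__":
    watch("assets")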
https://learn.logicsacademy.com/courses/loops-repeat-until-loops/lectures/3255752
code
Hide and Seek Exploration - Ask the class, "How can we change the code so that Dash looks for Dot until we press the Top Button?" - Select a student volunteer to change the code by tapping on the Repeat Until block and selecting the Top Button icon. - Then run the program so the class can see how the program has changed. - Continue selecting student volunteers to change the Repeat Until block in a similar manner. Encourage students to: - Change the Repeat Until event to Hear Clap or Object Behind. - Change the blocks inside the Repeat Until block.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00072.warc.gz
CC-MAIN-2021-17
553
7
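The lesson above is about the Repeat Until block in the Wonder Workshop app. As a rough text-based analogue of that control flow (not the block language itself), the program keeps running its body until the chosen event fires; the event-checking and behavior functions below are hypothetical stand-ins for the robot's API.

# Rough analogue of a "Repeat Until <event>" block: keep looking for Dot
# until the chosen event (Top Button, Hear Clap, Object Behind) is detected.
# top_button_pressed() and look_for_dot() are hypothetical stand-ins.
import random

def top_button_pressed() -> bool:
    return random.random() < 0.1  # pretend the button is pressed on ~10% of checks

def look_for_dot() -> None:
    print("Dash turns and scans for Dot...")

def repeat_until(event, body) -> None:
    while not event():
        body()

if __name__ == "__main__":
    repeat_until(top_button_pressed, look_for_dot)
    print("Top Button pressed: Dash stops looking.")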
http://www.splawforum.com/blog/blog/immigrant_resources/
code
Immigration Questionnaire, Checklist and Samples. We have created this section as useful resources which can help and provide some guidance on some important immigration questions. Feel free to share with others as long as our website is mentioned (www.SPLawForum.com) Guide To Immigrants General Immigration Petitions Employment Based Petitions Family Based Petitions
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999130.50/warc/CC-MAIN-20190620004625-20190620030625-00101.warc.gz
CC-MAIN-2019-26
368
6
http://www.tomshardware.com/forum/312265-33-discrete-graphics-card-soho-computer
code
I have a desktop computer with the following specifications: Pentium Dual CPU E2220 @ 2.40Ghz with 4GB RAM (3.49GB usable, Win7 32-bit). Intel GMA 3100 64MB dedicated plus 192MB shared. LCD monitor with VGA at 1920x1080 600W PSU on a miniATX case with an extra 80mm fan at the back of the case. Windows Experience Index Score of 3.3 (due to the “3D business and gaming graphics performance”, followed with a 3.8 for the “Desktop performance for Windows”). I use my computer for: Internet browsing (Internet Explorer 9—PDF, Flash, Google Earth, etc.). IM (WLM and YM). Playing MP3s and DIVX files with Windows Media Player (Seldom DVDs). Making short files with MS Word and MS Excel (2007, not going 2010. But I’ll consider the next update). I store and edit pictures taken with a basic consumer camera with Photoshop Elements (v7, not updating to v9. But I’ll consider the next update). I’ve never complained about my graphics card until I went to those website samples for the Internet Explorer 9 where you can try HTML5 see how powerful your computer is. I don’t game, but I could do an online game (in Flash, …) seldom. There’s absolutely no games installed in my computer--other than those that came with Windows 7--and I’m not planning to install any in the future (If I ever do it’ll be something simple, certainly not one of those high-demanding RPGs). I have read that the best *silent* cards that I could use are ATI Radeon HD 5450 and the ATI Radeon HD 5570. But what would I gain if I buy one of those cards for my desktop computer? Maybe, ATI or some other graphics card manufacturer will release a better performing and equally silent graphics card in the near future? The Intel Graphics Media Decelerator sucks so bad, with Flash and other accelerated content becoming available a HD5450 would be a good improvement. You'll be able to watch HD movies with it, something you undoubtedly cannot do now (likely why you rarely watch DVDs). He has the really old GMA 3000; fine for business apps, but not things like movies, video, and other typical home use. If it were GMA4500 or pretty much anything AMD, I'd agree that a discrete card might not do much, but it isn't, and even a HD4350 or HD5450 will make a difference. Particularly if a reason the OP doesn't view a lot of DVDs is because of skipped frames or other bad video performance, even a low-end discrete card will make movie-watching pleasant. I'm not suggesting a gamer-card, even from the bottom of Cleeve's monthly recommendations, but discrete video will help on this system.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135558.82/warc/CC-MAIN-20140914011215-00198-ip-10-234-18-248.ec2.internal.warc.gz
CC-MAIN-2014-41
2,578
17
https://community.norton.com/en/forums/norton-not-protecting-me-hotmail-hacked-long-running-script-help
code
Not what you are looking for? Ask the experts! Norton not protecting me. Hotmail hacked. Long running script. Help! Could anyone please help? Some weeks ago I was unable to log in to my Hotmail email account. When I tried to log in to Hotmail, I kept getting a message saying the password was wrong. It clearly wasn't. I contacted Microsoft who told me that all the multiple attempts I made to log in, when I kept getting a message telling me the password was wrong, showed up to them, as the log in being successful. I had run Norton full scan and the Eraser and neither found anything. Microsoft then told me that it must be malware on the laptop I had used to try to log in to Hotmail. That same laptop now stalls and crashes, and I repeatedly get a message that there is a "long running script" slowing down the webpages and causing them to crash. I tried deleting Norton and tried Kapersky, which seemed to have the laptop running better for a while, but then the "long running script" problem started again. I am using Windows 8.1 and it says all the Windows updates have been done. I also use Nord VPN. At the same time as all this has been going on, there have been multiple attempts by fraudsters to contact me, for example, they have been calling both the phone number I generally give out, and another one which I use in a very restricted manner generally for family and friends only. I also had messages from my credit card provider, telling me that the police had contacted them to say that my credit card details had been found on a fraudster's list. (That was genuinely from the credit card provider, I also spoke to them on the phone). They replaced my card. Can anyone please help me? I have tried Norton and got no help from them whatsoever.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999817.30/warc/CC-MAIN-20190625092324-20190625114324-00148.warc.gz
CC-MAIN-2019-26
1,759
8
https://aniridia.org.uk/conference/?shared=email&msg=fail
code
Our conferences and events are a chance to hear from, and ask questions in person to, people interested in and affected by aniridia. Aniridia Network Conference 2019 will be held on 1st June 2019 at Birmingham Voluntary Service Council (BVSC). BVSC, The Centre for Voluntary Action Full program details coming soon. Find out about the fantastic Aniridia Network Conference 2018.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00630.warc.gz
CC-MAIN-2019-09
378
5
https://sites.tntech.edu/renfro/2007/03/08/exporting-figures-from-matlab/
code
I just discovered the WordPress.com MATLAB feed today. Frinkytown's complaint about copying and pasting figures reminded me of things I had to do to write my M.S. thesis, and other things I discovered afterwards:

- MS Word is the devil, and Equation Editor is its evil spawn. When I started writing my thesis, I had been using WordPerfect for 9 years. I cannot imagine writing a technical document without Reveal Codes or a close equivalent. As you might also have guessed, I greatly preferred the old WP method for entering equations: typing out a math code for an equation rather than pointing and clicking through palettes of symbols, clicking placeholders for subscripts, superscripts, and other elements, etc. I'm a LaTeX geek now.

- Windows Metafiles: what Word does to Windows Metafiles is a close third on deviltry. To the specific question of getting decent graphs from MATLAB to Word, I'd check the following: in a figure window, select Edit / Copy Options. Make sure Clipboard format is set to Metafile (may lose information) or Preserve information (metafile if possible) if you're doing line graphs. Bitmaps will be ugly. Metafiles will look much better, since they're normally a vector file format, and should be resolution-independent. However, metafiles and Word are a match made somewhere with a statistically significant difference from heaven. Never, ever edit a MATLAB-generated metafile in Word. Word will screw up your lovely vectorized metafiles even if you don't actually make any edits. You have been warned. (Example images: a MATLAB figure pasted into Word, and the same figure after the Edit Picture menu.)

I hate Word. "Sure, go ahead: change my font weight. Misjustify my y-axis numbers. Change my y-axis label orientation by 90 degrees. And could you also make sure I can never, ever put it back the way it was? Thanks!" The only way to avoid this problem (aside from never clicking Edit Picture) is to insert figures in a format that Word won't try to edit. BMP files could be OK, but they are bulky and have the low-resolution problem mentioned in the original post. PNG files are at least much smaller, but they aren't any higher quality. For me, that leaves Encapsulated PostScript (EPS):

- Natively exported from MATLAB
- Standard for "journal-quality" graphs
- Can be converted to PDF relatively easily if you have LaTeX around somewhere, even if it's on a remote Unix system
- Requires a PostScript-compatible printer; no printing a final copy on my dad's Dell all-in-one inkjet
- Takes some extra effort to get a preview in Word

I did all my M.S. figures in EPS; WordPerfect handled them fine, and we had plenty of PostScript printers at the university. I also wrote a quick-and-dirty .m file to print every open figure window consistently:

function out=printall(printcmd)
% printall - Print all currently open figures
%
% With no arguments, this prints all figures with a
% regular print command.
%
% printall('printcmd') prints all figures with the
% print command 'printcmd'
%
% Examples:
%
% printall('print -dmeta fig%d.wmf') prints all figures
%                                    to Windows metafiles
%                                    named fig1.wmf, fig2.wmf,
%                                    etc.
if nargin==0
    printcmd='print';
end
figs=sort(get(0,'Children'));
for count=1:length(figs)
    feval('figure',figs(count));
    eval(sprintf(printcmd,figs(count)));
end

(Apologies for the formatting; I don't yet have a decent syntax highlighting or code-including plugin.)

This way, if I had a slew of figures that all needed the same axes limits, I could run printall('axis([0 10 -100 100])') at a MATLAB prompt and get all their axes sized consistently. A following printall('print -depsc2 fig%d.eps') would give me fig1.eps, fig2.eps, etc. from the current figure windows. Now that I'm a LaTeX geek, my current favorite for getting figures out of MATLAB is printall('pdfprint'); with this combination of MATLAB and a LaTeX distribution, I can generate cropped PDF files ready to reference from a .tex file. Excerpts from pdfprint:

% PDFPRINT Print a figure window to an Adobe PDF file.
% PDFPRINT alone sends the current figure to a PDF file named
% FigureN.pdf, where N is the current figure number.
% pdfprint filename.pdf
% Same as above but sends the output to a file named filename.pdf
warning('epstopdf command had non-zero return value');
delCommand=sprintf('cmd /c del %s',psTempFile);
renameCommand=sprintf('cmd /c move %s %s',pdfTempFile,pdfName);
renameCommand=sprintf('mv %s %s',pdfTempFile,pdfName);
warning('rename command had non-zero return value');
warning('delete command had non-zero return value');

Ben Hinkle at MathWorks posted some helpful figure export scripts in the File Exchange in 2001, and wrote an article about exporting figures in late 2000.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00575.warc.gz
CC-MAIN-2023-14
4,759
33
https://community.morphmarket.com/t/help-ball-python-belly-burn/22359
code
Hello. I have a 200+/- ball python who developed a burn when the heat tape under his tub malfunctioned. The other two snakes sharing the rack in their tubs had normal temperatures and are okay, thankfully. I feel incredibly guilty as the malfunction happened overnight. With the mass amounts of overwhelmed vet hospitals I’m unable to get an appointment with any of the reptile/exotic vets until Monday. For treatment thus far I have soaked him in a antibacterial solution (briefly) to clean the burn area and than I moved him directly into a new tub with no substrate as it was sticking to the stomach. He is currently comfortable on paper towels. After the soak he had a small bowel movement but even his vent area is quite red. He was beginning to shed when this happened. Areas of the burn are still quite pink towards the lower tail and the section higher up has already blistered and is now red and scabbed. Are there any remedies to help in the meantime? Internet advice is, as expected, mixed. I want to clarify he WILL be seeing a vet and I am on the cancellation list for over the weekend. My primarily goal at this point is to help with pain and prevent a downturn in health. I can post photos if helpful. Betadine soaks 2-3 times a week… neosporin without pain reliever can be applied also. If it’s blistering likely you will need antibiotics from the vet. Keep as clean as possible! Poor baby, it’s good that you caught it so soon! Steelserpents has the best advice to start out with here. As weird as it sounds…it might not be too bad that you need to wait for the vet visit. The problem with burn wounds is that the damage to the deeper tissues can take a bit longer to appear. When this happens, the tissue will start to go necrotic and slough off. The vet can help debride this and get that tissue out of the way so healing can start. Depending on when the vet can see you, you may already have some of this happen before the visit, but don’t be shocked if it takes a bit longer for some of the damage to show up. This isn’t because you’re not doing enough! It’s just the nature of burn wounds and tissue damage. Hopefully if the damage worsens after the visit they can see you again quickly to clean the area. I had a rescue with a SEVERE burn wound. It took a week for the damage to really show. He did great once I got him home and started him on a soaking routine, silvadine sulfa cream and a lot of patience. He needed soaks daily to keep flushing the tissue and helping to keep things on the mend for the first two weeks. There will be a lot of shedding while they try to heal the area up, and depending on how extensive, they may refuse to eat for some time. Or if you do try to feed, I went with smaller prey to help keep the area from stretching and irritating more. Even now, 2 years later, he still eats smaller prey than I would normally give to a snake of his size because of the scar tissue and risk of him regurging the meals.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510300.41/warc/CC-MAIN-20230927135227-20230927165227-00661.warc.gz
CC-MAIN-2023-40
2,976
12
http://www.pof.com/interests/competition.aspx
code
Today is the Present, go ahead and open it Hi, My name is Steve. It turns out that using pick up lines like "If I had a dime for every time I've seen someone as fine as you, I'd have ten cents" doesn't seem to attract any quality women... Huh, Coon Rapids Minnesota What? 36 Man Seeking Women
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095373.99/warc/CC-MAIN-20150627031815-00196-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
292
4
https://www.bhphotovideo.com/c/product/1401388-REG/plantronics_209746_101_blackwire_3215_c3215_ww.html
code
Designed for use in mass-deployment enterprise environments, the Blackwire 3215 USB Type-A Corded Monaural UC Headset from Plantronics is easy to install and supports UC Toolkit resources. This noise-canceling headset features a flexible boom microphone and a USB Type-A cable for connecting to most Macs and Windows PCs, as well as a detachable 3.5mm connector for mobile devices. The monaural design leaves one ear open while the other ear is covered by a leatherette-encased, on-ear speaker featuring hearing protection to guard against sounds above 118 dBA. Additionally, the Blackwire 3215 Headset comes equipped with inline controls that let you answer, ignore, end, hold, redial, and mute calls, as well as adjust the volume. The inline controls can also be used to control music and other multimedia applications. When you're finished with your call or multimedia application, the Blackwire 3215 folds flat for portability.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818711.23/warc/CC-MAIN-20240423130552-20240423160552-00470.warc.gz
CC-MAIN-2024-18
931
2
https://www.ruby-forum.com/t/webrick-very-unstable-with-mysql-5-0-on-windows/66032
code
So, I installed Ruby and RubyGems, then read the wiki post about getting MySQL working and installed that mysql.so into the right dirs, and actually everything works fine. Except that it's unstable: after maybe 4 or 5 queries, WEBrick will crash with a segmentation fault in basically a random module, never the same spot. Anyone got any suggestions? Is it just the mysql.so? Does it need to be compiled on this machine, maybe?
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741764.35/warc/CC-MAIN-20181114082713-20181114104713-00142.warc.gz
CC-MAIN-2018-47
425
7
https://docs.devexpress.com/CoreLibraries/DevExpress.Export.Xl.XlCondFmtRuleTimePeriod._properties
code
XlCondFmtRuleTimePeriod Properties
Represents the "Date Occurring…" conditional formatting rule.

| Property | Description |
| Formatting | Provides access to a set of formatting properties applied to cells when the status of the conditional formatting rule is true. Inherited from XlCondFmtRuleWithFormatting. |
| Priority | Specifies the priority of the conditional formatting rule. Inherited from XlCondFmtRule. |
| RuleType | Gets the type of the specified conditional formatting rule. Inherited from XlCondFmtRule. |
| StopIfTrue | Gets or sets whether the conditional formatting rules with lower priority can be applied. Inherited from XlCondFmtRule. |
| TimePeriod | Gets or sets the time period used as a formatting criterion in the "Date Occurring…" conditional formatting rule. |
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00069.warc.gz
CC-MAIN-2021-49
762
6
https://groups.google.com/g/comp.lang.scheme/c/2Jz2PQHAZag
code
Scheme Request for Implementation 153, by John Cowan, has gone into final status. The document and an archive of the discussion are available at the SRFI site.

Here's the abstract:

Osets are immutable collections that can contain any Scheme objects as long as a total order exists among the objects. Osets enforce the constraint that no two elements can be the same in the sense of the oset's associated equality predicate. The elements in an oset appear in a fixed order determined by the comparator used to create it.

Here is the commit summary since the most recent draft:
- editorial changes

Here are the diffs since the most recent draft:

Many thanks to John and to everyone who contributed to the discussion of this SRFI.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00082.warc.gz
CC-MAIN-2024-18
712
15
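The abstract above describes osets as immutable, duplicate-free collections ordered by a comparator. A rough Python analogue of that idea (not the SRFI 153 API, which is defined in terms of Scheme comparators) might look like the following, with the comparator reduced to a sort key for brevity:

# Sketch of an "oset"-like value: immutable, no duplicates under the given
# key, iteration in the order the key defines. Purely illustrative.
class OSet:
    def __init__(self, elements, key=lambda x: x):
        self._key = key
        unique = {key(e): e for e in elements}           # drop duplicates
        self._items = tuple(sorted(unique.values(), key=key))

    def contains(self, value) -> bool:
        return any(self._key(e) == self._key(value) for e in self._items)

    def adjoin(self, value) -> "OSet":
        # Osets are immutable: adjoining returns a new oset.
        return OSet(self._items + (value,), key=self._key)

    def __iter__(self):
        return iter(self._items)

    def __repr__(self):
        return f"OSet({list(self._items)!r})"

print(OSet([3, 1, 2, 3]))              # OSet([1, 2, 3])
print(OSet(["b", "a"]).adjoin("c"))    # OSet(['a', 'b', 'c'])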
https://docslide.us/documents/making-choices-with-data-models-and-database-nyerges-gis-database-primer-gisdbpchapter1v17doc.html
code
Making choices with data models and databases (Nyerges, GIS Database Primer, GISDBP_chapter_1_v17.doc)

Chapter 1: Data Modeling

Abstract

This chapter starts by differentiating data, information, evidence, and knowledge. Choosing the most appropriate way to organize data within a GIS depends on choices made about data models and database models. In this chapter we learn about the conceptual, logical, and physical levels of data models and database models as a way of understanding levels in data management software design and database design. Understanding data models is important for understanding the full range of data representation options we might have when we want to design and implement a database. A database model is developed as the basis for a particular database design, and is constrained by what data model we have chosen as the basis for developing various designs. Several data models are described in terms of the geospatial (abstract) data types that form the basis of those data models. The character of geospatial data types at the same time enables and constrains the kinds of representations possible in GIS. It is the basis for our ability to derive information and assemble evidence that builds geographic knowledge. The chapter concludes by listing and describing nine steps that compose a database design process. This process can be shortened or lengthened depending on the complexity of the design problem under consideration by a GIS analyst.

Database development is one of the most important activities in GIS work. Data modeling, or what is commonly called database design, is a beginning step in database development. Database implementation follows data modeling for database design; that is, the implementation depends on (is enabled and/or constrained by) the database software used. Even database management software must be designed with some ideas about what kinds of features of the world are to be represented in a database. No database management software can implement ALL feature representations and needs; such software would always be in design mode. Limits and constraints, i.e., the general nature of the GIS applications to be performed, exist for all software. To gain a better sense of data modeling for database design it is important to distinguish data from information. If there were no distinction, then software would be very difficult to develop and GIS databases would not be as useful. In this chapter we first make that distinction. We then differentiate data models from database models as a natural outcome of how databases relate to the software used to manipulate them. We then present a general database design process that can be used to design geodatabases, one of the newest and most sophisticated types of GIS databases currently in use.

1.1 Data, Information, Evidence, and Knowledge: A Comparison

Data modeling deals with classes of data, and thus is really more about information categories. Some might even say that the data classes are about knowledge, as the categories often become the basis of how we think about GIS data representations. To gain a sense of data modeling, let us define some terms: data, information, evidence, knowledge, and wisdom. Those terms are not always well understood in common practice, as in everyday language.
Over the years many people have written about the relationship among data, information, and knowledge in the context of information systems (Kent 1984). Defining the terms provides a clearer sense of their differences and relatedness to help with data modeling, as well as a basis for understanding how information products relate to knowledge in a broader sense. In a GIScience and systems context, Longley, Goodchild, Maguire, and Rhind (2001) have written about the relationships among all five of those terms (to which some have even added a sixth: truth). The definitions that we provide below are based on interpretations of Longley et al., and integrate Sayer's (1984) geographic treatment of epistemology, because GIS analysts work in contexts involving many perspectives.

Data is/are raw observations (as in a measurement) of some reality, whether past, current, or future, in a shared understanding of an organizational context. We typically value what we measure and we measure what we value; that is, what is important enough to expend human resources in order to get data?

Information is/are data placed in a context for use; it tells us something about a world we share. Geographic information is a fundamental basis of decision making, hence information needs to be transparent in groups if people are to share an understanding about a situation.

Evidence is/are information that is corroborated and hence something we can use to make a reasoned argument about the world. All professionals, whether they be doctors, lawyers, scientists, GIS analysts, etc., use evidence as a matter of routine in their professions to establish shared, valid information in the professional community. Credible information is the basis of evidence. How we interpret evidence shapes how we gain knowledge. When we triangulate evidence we understand how multiple sources lead to robust knowledge development, as the evidence reinforces or contradicts what we come to know.

Knowledge is the result of synthesizing enduring, credible, and corroborated evidence. Knowledge enables us to interpret the world through new information and, of course, data. Knowledge about circumstances is what we use to interpret information and decide if we have gained new insight or not. It is what we use to determine whether information and/or data are useful or not.

Wisdom - People who apply knowledge and never make mistakes are commonly characterized as attaining wisdom; if only knowledge creation were that easy. Taking one step after another to build and rebuild knowledge to gain a sense of insight about complex problems has thus far not been easy. Once people have a much more robust knowledge of inter-relationships about sustainability, then and only then will we move to this level of knowing.

The purpose of elucidating the above five levels of knowing is to provide readers with a perspective that GIS is not just about data and databases, but extends through higher levels of knowing. Given all that has been written and researched about these relationships, most people would say there are many ways to interpret each of the data, information, evidence, and knowledge steps, and that wisdom is often elusive.
Nonetheless, with the above distinctions in mind, this chapter presents a framework for understanding the choices to be made in data modeling, and makes use of a framework to distinguish data models and database models that underlie the development of information as the middle initial of GIS.

Data modeling is a process of creating database designs. Data models and database models are both used to create and implement database designs. We can differentiate data models and database models in terms of the level of abstraction in a data modeling language. A database design process creates several levels of database descriptions, some oriented to human communication, while others are oriented to computer-based computation. The terms conceptual, logical, and physical have been used to differentiate levels of data modeling abstraction. There is also a difference between a language that can be used to describe object classes and the use of that same language to specify outcomes in terms of a meaningful set of object classes. The former is called a data model and the latter is called a database model. The second (i.e., middle) column in Table 1.1 lists different levels of abstraction of data models. The third (right) column lists several levels of abstraction for database models.

Table 1.1 Differentiating Data Model and Database Model at Three Levels of Abstraction

| Levels of Abstraction | Schema language itself, i.e., a data model (a la Edgar Codd, circa 1979) | Result of schema language use, i.e., a database model (a la James Martin, circa 1976) |
| Conceptual - Informal | English narrative as an object class description framework; graphical depiction of points, lines, polygons. | Specific implementation of an object class description framework, e.g., transportation improvement decision making with regard to a specific track. |
| Conceptual - Formal | Unified Modeling Language (UML), as for example used in the MS Visio software. | Specific implementation of UML for a particular application, e.g., transportation improvement decision making, whereby any kind of logical data model could be used to implement a database. |
| Logical | Geodatabase data model (GDBDM) implemented in the MS SQL 2005 relational database management system; a set definition of constructs stored using spatial and attribute data types. | Geodatabase database model (GDBDBM), a specific implementation of the geodatabase data model for a particular application, e.g., an information base to support transportation improvement decision making. |
| Physical | MS SQL 2005 implemented on the Windows Server 2003 operating system using a well-specified set of data types for all data fields in all tables (relations) in the geodatabase. | The improvement program database implemented in MS SQL 2005 on the Windows Server 2003 operating system. |

No other terms have been proposed to clarify this important nuance; even the term object class has some difficulty when dealing with databases and programming languages. Nonetheless, one important thing to remember is that the database model is still an object class description; it is not the database per se. The database model makes use of a particular schema language to specify certain object classes that will be used in the creation of a database. A schema language is a language for describing databases (some have called it a data description language).
As mentioned in the two right-hand columns of Table 1.1, there is a difference between formulating a schema language (the basis of a data model, in the middle column) and using that same schema language to express a database model (the right-hand column).

Thus, one way to think of the difference between a data model and a database model at the conceptual level of expression is to think of the basics of a natural language such as English (the data model) and the use of that language to create a story about the world (the database model). The basics of the English language are constructs such as nouns, verbs, adjectives, adverbs, prepositions, etcetera, plus the rules those constructs use (similar to a data model language). When we put the constructs and the rules of English to use, we can create a story about a particular place at a particular time (as for example, a story about a community's interest in land use and transportation change). However, neither of these is an actual conversation. A natural language like English provides us an ability to communicate. One kind of communication is telling a story by making use of a language. But a story has not been told as yet. When a person actually tells the story, that person builds a database of verbal utterances and/or written words. When an analyst tells a GIS story, the analyst creates a geospatial database and then proceeds to elaborate on the use of that database through a workflow process as described in chapter 3.

So, data models as languages are formed using basic constructs, operations, and constraint rules. Such languages provide us with the capabilities to develop specific database designs. When we create the database designs they are like a type of story about the world; that is, we limit ourselves to certain constructs (categories for data), together with some potential operations on data, to tell a story through a template for a database representation. We thus include some feature categories of points, lines, and polygons, but we also exclude others. It depends on what we want to do (what kind of data analysis or display we might perform) with the story. The constructs of a data model and how we put them to use are often referred to as metadata. Metadata is information about data. It describes the particular constructs of a data model and how we make use of them. Data category definitions need to be meaningful interpretations in order to be able to model data. However, there are at least three levels of metadata (data construct descriptions and meaningful interpretations) in a data modeling context, as indicated in Table 1.1. So let us unpack those three levels for each of the data model and database model interpretations.

1.2 Data Models: the Core of GIS Data Management

Just before we get started, consider for yourself which of the three levels of conceptual, logical, and physical is more abstract, i.e., more general or more concrete, for you. The conceptual level is about meaning and interpretation of data categories. The physical level is about the bits and bytes of storing the data. For some, the conceptual model is more abstract, while for others, the physical model is more abstract. From the point of view of a database design specialist, general-to-specific detail proceeds from conceptual, through logical, to physical. With that in mind we tackle the levels in order of abstractness.
A conceptual data model organizes and communicates the meaning of data categories in terms of object (entity) classes, attributes, and (potential) relationships. This interpretation of the term data model is often credited to James Martin (Martin 1976), a world-renowned information systems consultant who had authored some 25 books as of the mid-1980s. Many of these books described graphical languages for specifying databases at an information level of design. That level was called the infological level by Sundgren (1975), distinguishing it from the datalogical level, to highlight the difference between information and data. Information was defined as data placed in a meaningful context, i.e., an interpretation of data given perhaps by a definition or perhaps by what we expect others to know, a common knowledge about the world.

A major challenge for researchers has been the development of a computable conceptual data model. The first language developed to approach a level of rigor for potential computability was the entity-relationship model (Chen 1976). Although the language was not entirely computable, it was credited as the first formal language to be used widely for database design, as many software systems were designed and implemented based on that model. In addition, other researchers worked on data models that became known as semantic data models (Hull and King 1987). A semantic data model incorporates meaning into the database storage, rather than external to storage. Considerable development of semantic data models, and in particular use of the entity-relationship model, occurred across the 1980s and 1990s. Still motivated by the challenge of a computable conceptual language, an object-oriented approach to system design became popular in the 1990s (Rumbaugh, Jacobson, and Booch 1999), even within a geographic information systems context (Zeiler 1999). An object-oriented approach considered object constructs, behaviors of objects, and constraints in systems modeling. Those three approaches were then synthesized into the Unified Modeling Language (UML) in the mid to late 1990s (Rumbaugh, Jacobson, and Booch 1999). Nonetheless, whether the objects are specified in a natural (English) language, in diagrams, or in UML, the level of specification is still conceptual, because these approaches describe data categories, the relationships among data categories, and constraints on those relationships.

When a natural language or UML description is translated into a computable data language, i.e., a specific data management system's language for framing a database, then the expression is referred to as a logical model. Logical data models (e.g., object, relational, or object-relational) are the underlying formal frames for database management system software. A logical data model expresses a conceptual data model in terms of (software-)computable: a) data constructs (i.e., entity classes or object classes), b) operations (to create relationships), and c) validity constraints. This interpretation of the term data model is often credited to Edgar Codd (1970), who is also the person credited with inventing the relational data model as the design basis of relational database management systems. A logical data model is a formal design for a data management system to be implemented as a software system. Hence, the data construct component of the relational data model is called a set, in a mathematical sense, or table headings in a more colloquial sense.
The operations component is the relational calculus (later simplified to the relational algebra), i.e., the operations that can be performed on the set constructs. The validity constraints are rules for manipulating data in a database to keep the database from getting corrupted. The term logical data model stems from the logic of the relational calculus, which is a formal body of rules for operating on data. However, other forms of logical data models exist, like object models (as in the geodatabase model from ESRI, now in use), that do not have as much formal mathematical background but, nonetheless, are useful ways of storing data in a computer. Logical data model operations work on data constructs within the constrained realm of the validity rules, thereby deriving different information even if the original data are the same. A shapefile data model, a coverage data model, and a geodatabase data model would implement the same conceptual data model differently, because the data constructs, operations, and validity constraints of the data models are somewhat different. The data constructs, operations, and validity constraints of each of the models provide a different way to derive information from data. The data models were invented by ESRI lead technical staff at different times to satisfy different information needs.

A physical data model is a logical data model that has been detailed further to include performance implementation considerations. A physical data model expresses a logical data model in terms of the physical storage units available within a particular database programming language implemented within the context of a particular operating system. A physical data model includes capabilities to specify performance enhancements such as indexing mechanisms that sort data records. Database languages (e.g., the structured query language, SQL) are special implementations of more general programming languages (e.g., C or C++). The data constructs (data structures) of programming languages are used to develop the data constructs (actually database structures) of database languages. The process sequence of a database language is implemented using the process sequence of programming languages.

Why does any of this really matter? Differences in data models (whether at the conceptual, logical, and/or physical level) dictate the differences in data constructs used to store data, the differences in operations on those data for retrieving and storing, plus the differences in validity constraints used to ensure a robust database. Correspondingly, once one chooses to use (or has no other choice but to use) a data model, then only certain database constructs (ways of describing the world), operations (ways of analyzing the data), and validity rules (ways of ensuring robust results) are possible within your database model, i.e., the design of your particular database. No wonder ESRI has created so many, as they kept discovering new ways to work with GIS data. A sharp GIS analyst will discover ways of moving data between data models while retaining the original intent of the database design. Below we provide frameworks for choosing conceptual and logical data models appropriate to your task of data representation.

1.2.1 Conceptual Data Models

As per Table 1.1, we can use a natural language such as the English language or a database diagramming language to express the main ideas in a database design.
Our choice really depends on the people participating in the design. Natural language has the advantage of being more easily understood by more people. However, natural language has its limitations in that it is often not as clear or precise, because it is unconstrained in its semantic and syntactic expression. People express themselves with whatever constructs (nouns) and operators (verbs) they have learned as part of life experience. A diagramming language, for example an entity-relationship language, is a stylized language. That is, it adopts certain conventions for expression. As such, the expressions tend to be clearer than natural language. However, people need to learn such a language, like any language, to be proficient in expression.

The following English language statements lead to comparable expressions in an Entity-Relationship (ER) language (see Table 3.2 for entity classes). The language is the oldest conceptual database design language in use, popularized by Peter Chen (1976); it was actually his dissertation. It quickly caught on because of its simplicity, but also because data management technology was growing in importance and attention in the information technology world.

- The facilities will be located on land parcels, with compatible land use
- Streams/rivers should be far enough away from the facility
- The street network will service the facility

(Figure 1.1 Simplified entity-relationship diagram showing only entity classes (boxes) with attributes (on the right side after lines), and showing no relationships; the entity classes are Parcel, Stream/River, and Street.)

In a natural language, nouns are often the data categories. The expressions often provide information other than categories, such as surrounding features. In English the categories could be in either singular or plural form as a natural outcome of usage. In an ER expression, by convention, the data categories are singular nouns. Nonetheless, there is a correspondence between the English and the ER expressions; that is, nouns are the focus of data categories. As the ER language is part graphic and part English, we can also use a purely graphical language to depict the differences among geodata entity types, particularly in consideration of the spatial aspect of geodata. Remember, earlier you learned that there are three special aspects to data models: the constructs, operations (that establish relationships), and integrity/validity constraints (rules).

As the first aspect, spatial data constructs in geospatial data models are composed of geospatial object classes, also called data construct types by some people (Figure 1.2). The geospatial data construct types are different from each other due to geometric dimensionality and topological relationships stored (or not) as part of the data constructs. Basic geometry is given by dimensionality. Data construct types of a 0-, 1-, or 2-dimensional character are shown in Figure 1.2. Points are 0-dimensional mathematical object constructs defined in terms of a single coordinate (or tri-ordinate) space. However, shape (e.g., the shape of a polygon) within a dimensionality is a natural outcome of the storage of specific coordinates. The coordinates are an outcome of the measurements of locational relationships.

(Figure 1.2 Common geospatial data construct types for raster and vector data models (National Institute of Standards and Technology 1994 and Zeiler 1999).)

Some data models contain only geometric geospatial constructs, i.e., just points, lines, and polygons, represented through the use of coordinates. No spatial relationships (called topology) are stored in the data model constructs. These relationships would have to be computed if they are to be known. This leads us to the importance of the second aspect of data models: the operations.

The second major aspect of data models concerns operations, i.e., relationships among constructs. Operations are a way of deriving relationships. Topology is the study of three types of relationships (connectedness, adjacency, and containment) among objects embedded in a surface. Topology can be stored (represented) implicitly or explicitly in a data model. The implicit representation of topology stems from using simple constructs in a representation, as in the instance of a construct such as a cell in a grid structure or a pixel in a raster structure. The cells or pixels in their respective data structures are each the same size. Thus, the relationship termed adjacency, meaning next to, can be assumed/computed based upon cell size. Connectedness, as a relationship derived from adjacency, can be determined by taking a data structure walk from one grid cell to the next. Adjacency and connectedness derive from the same next-to relationship. When geospatial objects are not the same size, then adjacency must be stored. The most primitive topological object is called a node. The relationship that occurs when two nodes are connected is called a link. Vector data constructs such as the nodes and links in Figure 1.2 must have explicitly stored relationships to express adjacency, connectedness, and containment in order to compose a topologic, vector data model.

A third major aspect of conceptual data models is the types of rules that assist in constraining operations on data elements. One important type of rule is the validity rule. A validity rule maintains the valid character of data: no data should be stored in a database that does not conform to the particular construct type which is being manipulated at the time. Another kind of validity rule governs how relationships among data elements are established. For example, object-oriented data models can represent the logical connectedness between features such as storm sewer pipes. In such a data model, each segment in an object class called storm sewer pipe is to be connected to only one other storm sewer pipe unless a valve or junction occurs; then, three pipes can be connected. In addition, storm sewer pipes can only be connected to sanitary sewer pipes if a valve occurs to connect them. Why choose to use one conceptual data model rather than another for any particular representation problem? Each has its special character for depicting certain aspects of the geospatial data design. None is particularly superior for all situations.

1.2.2 Logical Data Models

Logical data models are developed as a result of including certain geospatial data constructs in the software design of the data model in particular ways. When we choose particular ways of representing data, this both enables and constrains us to certain data processing approaches. Several GIS software vendors offer various approaches to logical data models; it is what distinguishes one solution from another.
There are several GIS software vendors that provide great solutions for GIS computing directed at various market segments. As such, there is a tremendous assortment of GIS software from which to choose. Among the vendors and products are ESRI with ArcGIS, Unisys with System 9, Caliper Corp with TransCAD, GE Energy with Smallworld GIS, and MapInfo Corporation with MapInfo. As mentioned previously, a data model consists of three components: data constructs, operations, and validity rules. The combination of these three components is what makes data models different from one another. However, the data construct component is commonly viewed as the most fundamental, because without data constructs there would be no data, hence no need to perform data processing. The different vendors offer different nuances in their data models. As there are far too many to cover in the space of this textbook, we take a look at the most popular (largest-selling) among them, the ArcGIS data models from ESRI.

ESRI has been developing and distributing GIS software for over thirty years. ESRI is a world leader in GIS software in terms of number of installations, which is why we use their data models as a basis for this discussion of logical data models. Because the installed customer base is so large, legacy issues must be addressed, i.e., installed software of older database systems. There is a tremendous challenge to develop new approaches to geospatial data organization while simultaneously maintaining an installed, legacy base. That is why conversion programs and vendors exist, doing good business. The ArcGIS logical data model languages are the raster or image/grid data model, the triangulated irregular network (TIN) data model, the shapefile data model, the coverage data model, and the geodatabase data model. The TIN and the grid are often used to represent continuous surfaces. The shapefile, coverage, and geodatabase data models are used for storing points, lines, and areas that represent mostly discrete features. Early on, many researchers distinguished the two types as raster and vector data models. Later on, others referred to the difference as objects and fields (Cova and Goodchild 2002). There are fundamental differences between surfaces/fields and objects/features. What that leads to is a difference in the design and implementation of data models described in terms of the three components: data constructs, operations, and validity constraints. We pick up from the conceptual data constructs of the previous section and show how implementation of those constructs has led to organizing a data model in a particular way. First we will treat all the data constructs. We then address the operations for each, and finally the validity constraints.

1.2.2.1 Data Constructs of Five ESRI Data Models

Differentiating the data models in terms of data construct types is the most well-known distinction among them. In Table 1.2 the data models are listed left to right roughly in terms of the complexity of the data model, although it is only a rough approximation. The constructs provide a comparison of basic structure among the five data models. The raster, grid, and TIN data models are used for representing surfaces. There are differences among them in terms of the spatial data construct types used to represent surfaces (see Table 1.2).
The grid provides for a coarse-resolution sampling of data points that are commonly arrayed at a regular spacing. The grid data model is meant to represent an elevation surface. The grid data model is also known as a digital elevation model (DEM), because that is the topical area in which it received considerable use. Rather than focusing on points of information content that stand out for special reasons, the grid data model samples points at regular intervals across a surface. As such, it uses considerable data, as it is an exhaustive sampling technique. Topological relations among grid points are implicit in the grid. Because topological relations are implicit, geometric computations are very quick.

Table 1.2 Spatial Data Construct Types Associated with Data Models. (The original table is a matrix whose columns are the logical data models, Image and Grid under raster data models and TIN, Shapefile, Coverage, and Geodatabase under vector data models, and whose rows are the spatial data construct types: image cell, grid cell, point, multipoint, node, segment/polyline, link, chain/arc, face, tic, annotation, simple junction, simple edge, complex junction, complex edge, section, route, ring, polygon, region, and network, with a mark indicating which construct types each model includes.)

The raster data model became popular when satellite imagery was introduced. The density of the regular spacing of points became quite high, and a variety of software has been developed to store and manipulate images. As such, the raster data model includes both the image data model and the grid cell data model. The pixel points are commonly regularly spaced, although theoretically they would not have to be regularly spaced. Data processing of regularly spaced points is much easier than for irregularly spaced points (samples).

The TIN data model is meant to represent an elevation surface, but any surface can be modeled in a TIN. A TIN takes advantage of known feature information to compose the surface representation, and is thus parsimonious with data. Peaks, pits, passes, ridges, and valleys can be included in the model as high-information-content locations, also called critical points. They are critical for capturing the low and the high elevation points on a surface. Topological relations among vertices are explicit in the TIN, using nodes (peaks and pits) and links (valleys, passes, ridges, etc.) to represent the surface. As three points define a plane, the surface planes bend easily along the edges of the planes to characterize a surface. Note the peaks, pits, passes, ridges, and valleys along the edges that can be used to trace a path.

The shapefile data model contains features with no topologic relationships; it contains geometry only (see Table 1.2). The points in a shapefile are commonly irregularly spaced, representing point-like features in the world. They are taken individually to be meaningful. This is perhaps the major difference between points in the three surface data models described previously and points in the shapefile, coverage, and geodatabase data models. The multipoint spatial construct can be used to represent a cluster of points, such as a given set of soil samples taken in a field at one point in time. That specific set is retrieved with a single ID, rather than every point having its own ID. The lines within the shapefile data model can be line segments (straight line from point to point), circular arcs (parameterized by a radius and start and stop points), and Bezier splines (multiple curves to fit a series of points).
The coverage data model had been the mainstay of ArcInfo software for almost twenty years. It is also called the georelational data model, composed of spatial and attribute data objects. The coverage includes feature classes with topologic relationships within each class (no topology between layers); e.g., a river network would not be part of a transportation network if the transportation network is a highway network (see Table 1.2). The primary objects are points, arcs, nodes, and polygons within coverages. Topological arcs and non-topological arcs (polylines) are possible. Arcs close (start and end coordinates match) to form a ring (boundary) of a polygon. Secondary objects are tics, links, and annotations. Transect intersection coordinates (tics) are used to provide the spatial reference.

The geodatabase data model is a recent inclusion in ArcGIS. It contains objects that provide functional logic and temporal logic as well as topo (surface) logic relationships. Logical relationships with constraints provide the most flexibility for modeling feature structure and process (see Table 1.2 for geodatabase constructs). The feature classes can be collected into similarly themed structures in what are called feature datasets. Topologic relationships can span feature classes when included in a feature dataset. The base features include categories for generic feature classes and custom feature classes. The generic feature classes include: point, multipoint, line (line segment, circular arc, Bezier spline), simple junction, complex junction, simple edge, and complex edge.

One fundamental question is why use one data model rather than another. Let us consider some of the advantages and disadvantages of the data models in relation to each other (Zeiler 1999). In the geodatabase model, the spatial data and attribute data are at the same level of precedence; that is, either can be stored followed by the other. However, in the coverage data model the spatial data geometry must be stored first, and then the attribute data. The shapefile data model must also store a geometry first (point, polyline, polygon) before the attributes can be stored. Temporal data in the geodatabase are stored as an attribute, as in the shapefile and the coverage, but with their own special domain of operations. The geodatabase data model was developed to provide for built-in behaviors: feature ways of acting (implemented through rules) that can be stored with the data. In contrast with the coverage and the shapefile data models, the geodatabase performs data management using a single database manager as a relational object, rather than as file management as in the shapefile, or file management plus database management as in the coverage data model. Large geodatabases do not need to be tiled (squares of space physically managed) using a file manager as in the coverage data model. There is no opportunity for very large database management in the shapefile model. In addition, the geodatabase environment allows for customized features like transformers, parcels, and pipes (not geometry defined, but attribute defined).

1.2.2.2 Relationships Underlying the Operations of Five ESRI Data Models

The second major component of a data model is the set of operations that can be applied against the data constructs for that model. There are four basic types of operations for data management: create (store), retrieve, update, and delete.
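Before looking at how these operations differ across the data models, here is a minimal, generic sketch of the four operations applied to a hypothetical feature table held in a plain Python dictionary; real GIS software wraps the same four actions in its own storage engine, so this is an illustration only:

# A minimal, generic sketch of the four data-management operations on a
# hypothetical feature table (a plain dictionary keyed by feature ID).

parcels = {}

parcels["P-001"] = {"land_use": "residential", "area_m2": 840.0}   # create (store)
record = parcels["P-001"]                                          # retrieve
parcels["P-001"]["land_use"] = "commercial"                        # update
del parcels["P-001"]                                               # delete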
Of course, what actually gets created, retrieved, updated, and deleted is based upon what data model constructs are being manipulated. All of the data models contain many specialized operations that make sense only for that data model because of the inherent information stored within the structure of the data constructs. We will address these operations in more detail in chapter 7 when considering analysis, but let us characterize the major differences among the data models by examining the spatial, logical, and temporal relationships inherent within the models (see Table 1.3).

Table 1.3 Spatial, Logical, and Temporal Relationships Underlie Operation Activity.

Relationship                   Image      Grid       TIN        Shapefile  Coverage   Geodatabase
Spatial distance & geometry    derived    derived    derived    derived    derived    derived
Spatial topologic
  Explicit/implicit            implicit   implicit   explicit   derived    explicit   explicit
  Connectedness                implicit   implicit   explicit   derived    explicit   explicit
  Adjacency                    implicit   implicit   explicit   derived    explicit   explicit
  Containment                  derived    derived    explicit   derived    explicit   explicit
Function logic                 derived    derived    derived    derived    derived    rules stored
Temporal logic                 derived    derived    derived    derived    derived    rules stored

Key: implicit = stored as part of the geometry, easily derived; explicit = stored within a field, easily processed; derived = information computable, but time consuming; none = cannot be processed from available information.

Distance operations are one of the fundamental distinguishing characteristics of spatial analysis in a GIS. Thus, distance is derived in all data models. The raster and grid data models contain a single spatial primitive, i.e., the cell/pixel. As such, the spatial topologic relationships are implicitly stored within the data model based on the row and column cell position, making the spatial topologic operations very easy and powerful. Topological relationships are explicitly stored in the TIN, coverage and geodatabase models. However, since the shapefile data model spatial data construct types are all geometric, they require computation of topological relationships, which is why that category is labeled derived. There is no logical and/or temporal information inherent in the data models, except for the geodatabase model, thus such information can be computed from attribute storage using scripting language software. A scripting language is like a high-level programming language (e.g., VisualBasic Scripting). The TIN, coverage, and geodatabase data models are the most sophisticated in terms of storing relationships. These data models support spatial topologic, functional logic and temporal logic operations more flexibly than all others. The newest of the data models is the geodatabase data model. As such, the functional relationships can be stored as customizable rules rather than having to use a scripting language to generate the relationships.

1.2.2.3 Validity Rules of Five ESRI Data Models

The third component of a data model is the set of validity rules that restrain the operations from creating erroneous data content. Validity rules operate at the level of the attribute field, keeping data contents within a range of acceptable values, for example, coordinates that should be within a particular quadrant of geographic space, or land use codes that must match the allowable zoning regulations.
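The derived entries in Table 1.3 mean the relationship has to be computed from raw geometry each time it is needed. As an illustration (a plain-Python sketch, not any ESRI routine), containment for geometry-only storage such as a shapefile can be derived with a basic ray-casting point-in-polygon test, whereas a model that stores topology explicitly could simply look the answer up:

# Sketch of a "derived" relationship: with geometry-only storage, containment
# must be computed each time, here with a basic ray-casting point-in-polygon test.

def point_in_polygon(point, polygon):
    """Return True if point (x, y) falls inside polygon [(x, y), ...]."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)          # edge straddles the horizontal ray
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

zone = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon((3, 4), zone))   # True: derived from geometry, not stored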
If zoning is quadrant A, then the land use code must be xyz, abc, or yyt.

Because the raster, grid, TIN, shapefile and coverage data models were brought into commercial use before GIS software vendors understood the usefulness of integrity rules, such rules are not explicitly included in those data models. However, the geodatabase model, being the newest of the data models, contains a variety of integrity rules, and functional integrity rules can be developed, for example, a building specification within the data model: the kinds/sizes of a valve that can connect the water pipes on either side of that valve. Such rules can be considered part of the physical data model level. Next we treat the physical data model level.

1.2.3 Physical Data Models

A physical data model implements a logical data model. Data type implementation and the indexing of data type fields are specified at the physical data model level. Data type refers to the format of the data. All of the data fields must have a clear data format specification as to how data are actually to be stored. Potential primitive data types are listed in Table 1.4. For example, some of the data types are used to specify data for a transportation feature class in Figure 1.5. Data indexes support fast retrieval of data by pre-sorting the data and establishing ways to use those sorts to look at only portions of the data when wanting to find a particular data element. Specifying data formatting and indexing details helps the database design perform well when transformed into an actual database.

Table 1.4 Data Types

Numeric
  Integer: positive or negative whole number, usually 32 bits
  Long Integer: positive or negative whole number, usually 64 bits
  Real (floating point): single precision decimal number
  Double (floating point): double precision decimal number
Character (text string): alpha-numeric characters
Binary: numbers stored as a 0 or 1 expression
Blob/Image: scanned raster data of usually very large size
Geometry: shape (the shapes shown in Figure 1.2)

Microsoft Office Visio UML static diagrams use a table with tabs to specify data types as part of defining class properties (see Figure 1.3). Once the pointer in the categories of Figure 1.3 is set to attributes, the data types in the attribute portion of the window are set through a pull-down window under the type heading.

Figure 1.3 Physical schema specification for data types depicted using MS Office Visio UML class properties.

As a performance enhancement, indexing of fields can be added to the physical schema (Figure 1.4). Because databases commonly get very large, indexes are added to the schema to improve data retrieval speeds. For example, R-trees (short for region trees) or quad trees (which subdivide space into quadrants) are very popular means of partitioning a coordinate space without adding tremendous overhead for storage. These trees are logical organizations rather than physical organizations of data, but are part of the physical schema because they generate information at a very detailed level. The increase in speed of access to data for searching is well worth what little extra space is required.
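The following sketch gives the flavor of such an index without the machinery of a real R-tree or quad tree: a fixed-grid index in plain Python that buckets features by cell so a query inspects only nearby candidates instead of scanning the whole table. The cell size and feature names are assumptions for illustration:

# Simplified stand-in for an R-tree/quad tree: a fixed-grid spatial index that
# buckets features by cell so a query only inspects nearby candidates.

from collections import defaultdict

CELL = 1000.0                      # assumed bucket size in map units
index = defaultdict(list)

def add(feature_id, x, y):
    index[(int(x // CELL), int(y // CELL))].append((feature_id, x, y))

def candidates_near(x, y):
    cx, cy = int(x // CELL), int(y // CELL)
    found = []
    for dx in (-1, 0, 1):          # the query cell and its eight neighbors
        for dy in (-1, 0, 1):
            found.extend(index[(cx + dx, cy + dy)])
    return found

add("hydrant-17", 500250.0, 5281400.0)
print(candidates_near(500300.0, 5281350.0))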
This will enhance the performance of retrieval for very large data sets, but would not be very noticeable for smaller data sets. When a hard disk is not de-fragmented periodically, the data are stored in many different physical locations, taking longer to retrieve. 1.3 Database Design Process A database model (using a particular data model schema) is an expression of a collection of object classes (entities), attributes and relationships for a particular subject context, e.g. land resources, transportation resources, or water resources. Even more accurately, that context might be an application or set of applications for a particular topical domain of information like a transportation improvement programming or hydrological planning situation where the decision situation matters considerably. A database model can be expressed at each of the three levels of abstraction, conceptual, logical and physical as described previously. These are called levels of database abstraction because we choose to select (abstract) certain salient aspects of a database design. Different data models (languages) as presented in the previous sections are used to create the database models. 1-17 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Representation in a database model depends on what aspects of the world need to be modeled and what choice was made of a data model to implementation that representation. What ESRI refers to as data models at their technical support website for data modeling are what we call database models at the conceptual level of design, with several logical characteristics (See ESRI 2006). The database models can be used to jump start GIS geodatabase designs. The descriptions of feature classes and attributes as a conceptual database design can be translated into logical database models (your choice of shapefile, coverage or geodatabases data models) and then into physical database models. Having the opportunity to fine tune the storage retrieval and access of the database would be accomplished through the physical data model as implemented within particular data management software for a particular type of operating system. In review, the importance of the conceptual level is the name and meaning of the data categories (feature classes). The importance of the logical level is the translation of that meaning (and we could also say structure of meaning as well) into several attributes for measurement (i.e., potential computable form) in a database management software system. The importance of the physical level is the actual storage of the measurements and the performance of the particular database being stored in terms of how data will be stored and retrieved from the disk. Below are steps outlining a geodatabase design process adapted from Arctur and Zeiler (2004) Designing Geodatabases, with some additions to complete the data modeling process set within the context of the Greenvalley project we introduced in chapter 3. The process includes conceptual, logical, and physical design phases. Each of those phases ends in the creation of a product called a database model, i.e., a structural representation of some portion of the world at an appropriate level of abstraction. The schema is the most visual part of that database model design. As with any database design, the schema design is a very time consuming part. As mentioned earlier, a schema is a table structure in a relational model-oriented design. 
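Since a schema in a relational setting is ultimately a table structure, a small generic sketch can make the idea concrete. The example below creates a hypothetical parcel table in SQLite (Python standard library); it is not a geodatabase schema, which would be created through the GIS software itself, but it shows what table structure plus data types means at the physical level:

# Generic relational sketch of "a schema is a table structure": a hypothetical
# parcel table created in SQLite; field names are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE parcel (
        parcel_id   TEXT PRIMARY KEY,   -- unique identifier
        land_use    TEXT NOT NULL,      -- coded land-use value
        area_m2     REAL,               -- double-precision area
        geometry    BLOB                -- shape stored as a binary field
    )
""")
conn.execute("INSERT INTO parcel VALUES (?, ?, ?, ?)",
             ("P-001", "residential", 840.0, None))
print(conn.execute("SELECT land_use FROM parcel WHERE parcel_id = 'P-001'").fetchone())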
It is very important that data analysts understand how data are organized, and in particular how to create non-redundant data expressions (also called normalize) when designing a database. There are four approaches to building geodatabase schemas in ArcCatalog 9.x as follows. 1. Create with ArcCatalog wizards a. Build tables in ArcCatalog>>right click>>new object 2. Import existing data (and the existing schema) a. Right click the database and import an object. You can also export from the object to the database 3. Create Schema with Computer-Aided Software Engineering (CASE) tools a. Use Microsoft Visio or like software for development of UML 4. Create Schema in the geoprocessing framework a. Use ArcToolbox geoprocessing to create objects To undertake the database design steps in section 1.3 we can use a data modeling language called the unified modeling language (UML), and in particular the artifact called class diagrams to create entity relationship models for the conceptual phase of database design. You have actually 1-18 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc seen a conceptual class diagram previously in Figure 3.3, depicting the Greenvalley database. A readable tutorial about how to use UML to render entity-relationship diagrams is available from IBM (2003). ESRI (2003) provides a UML tutorial showing how we can use the geodatabase data model to create a database model. The Geodatabase Diagrammer command in ArcCatalog will create a MS Visio diagram portraying summary and detail descriptions of geodatabase schema information (Figure 1.5). However, it requires that you already have a geodatabase, rather than trying to design one. The diagram will replicate the look and feel of the standard ESRI data model posters. You will have to move things around and add descriptive text to enhance the readability. If the command script is not available in your ArcGIS desktop installation, it is available for download from the ESRI Arcscript web page. To get the Geodatabase Diagrammer script, go to http://arcscripts.esri.com and use the text string geodatabase diagrammer in the search box (the URL changes form time to time). Figure 1.5 Geodatabase Diagrammer detail output for a single feature class. In the following subsections we develop a geodatabase design of the Greenvalley project using nines steps categorized in terms of the three database model levels introduced previously. We provide an overview in Table 1.5. Table 1.5 Geodatabase Database Design Process as Data Modeling Conceptual Design of a Database Model Identify the information products or the research question to be addressed Identify the key thematic layers and feature classes. Detail all feature class(es) Group representations into datasets Logical Design of a Database Model Define attribute database structure and behavior for feature classes Define spatial properties of datasets Physical Design of a Database Model Data field specification Implementation Populate the database 1-19 http://arcscripts.esri.com/Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc 1.3.1 Conceptual Design of a Database Model A conceptual data model language commonly consists of a simple set of symbols, e.g., rectangles for data constructs, lines for relationships, and bulleted labels for attributes, that can be used to compose a diagram to provide a means of communicating among database designers. 
We use the conceptual data model language to specify a conceptual database model so we can understand the particulars of a database design domain like transportation planning, or land use planning or water resource planning. The simple diagrams are as close to everyday English as we can get without loading the diagrams with lots of implied, special meanings. We make use of the UML because ESRI has provided a utility to convert UML conceptual diagrams into logical schemas. As part of the conceptual phase of database design, some of the steps use data design patterns. Data design patterns are reoccurring relationships among data elements that appear so frequently we tend to rely on their existence for interpretation of data. Data design patterns are similar to database abstractions identified some 20 or so years ago, i.e., a relationship that is so important that we commonly give it a label to provide a general meaning for the pattern. By the early 1980s, four database abstractions were identified in the semantic database management literature classification, generalization, association, and aggregation (Nyerges 1991). These four data abstractions relate directly to data design patterns, and are used in the ArcGIS software (ArcGIS names are in parentheses to follow): classification (classification), association (relationships), aggregation (topology dataset, network dataset, survey dataset, raster dataset), and generalization-specialization (subtype). Such design patterns (abstractions) specify behaviors of objects within data classes to assist with information creation. The products of a conceptual stage of database design helps analysts and stakeholders discuss the intent and meaning of the data needed to derive information, placing that information in the context of evidence and knowledge creation. That is, both groups want to get it right as early as possible in the project before too much energy is expended down the wrong path. 1. Identify the information products or the research question to be addressed Most every project has a purpose and requires a set of information products that address the purpose. To develop the best information available, identify the information products to be produced with the application(s). For example a product might be a water resource, transportation, and/or land use plan as an array of community improvement projects over the next twenty years. Another could be a land development, water resource, or transportation improvement program that is a prioritized collection of projects within funding constraints over the next couple of years. The priority might simply be that we can only fund some of the projects among a total set of projects recommended for inclusion in an improvement program. A third product might be a report about social, economic, and/or environmental impacts expected as a result from the implementation of one or more of those projects in an improvement program. A GIS data designer/analyst would converse with situation stakeholders about the information outcomes to appear in the product, rather than guess. If you are the stakeholder, then mull it over 1-20 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc a bit to make sure you have an idea. Some guidance should be available in terms of a project statement. In the Greenvalley project, the City Council provided the purpose and the objectives in siting a wastewater treatment facility. 
In another context, perhaps the purpose is a research statement, in which one or more research questions have been posed. Sometimes such questions are called need to know questions. For example, what do the stakeholders need to know about the geographical decision situation under investigation? What are the gaps in information, evidence, and/or knowledge? What information is not available that should be in order to accomplish tasks related to decision situations? What changes (processes) in the world are important to the decision situation? What are the decision tasks? Those questions should help the reader articulate information needs as a basis of data requirements. From a landscape modeling perspective as described in chapter 3, we can develop value structures that underpin the information needs of decision models. What we store in databases are data values to be able to derive information from the representation models through to decision models. From where does this value arise? What fosters the development of certain (data) values in our databases? Looking back to the conceptual data modeling process, there is undoubtedly some reason why certain data categories are chosen and others not. The answer lies in what is valued to be represented. The single most important factor determining the future of our environment is peoples sense of values. The problems of the environment are not, fundamentally, scientific or technical they are social Values are the hardest things to discuss, but societys values are the driving force which determines what it does and does not do. Only when we know who we want to be and why, can we start to question whether our current actions are true to that ideal. (IUCN 1997 pp. 16-18) A chain of influence appears on p. 18 of IUCN (1997) linking the conditions of the environment to problematic human behavior, which in turn are linked to motivating values and power to act, and then again are linked to design intervener action. Part of that intervener action is the appropriate data representations of problems and solutions. Building a database that recognizes peoples values of the world in certain ways is a step toward understanding what might be done to sustain and/or improve certain social, economic and ecological conditions. There is a connection between community values and plans, and databases developed to provide a basis for data analysis to create those plans. In a pluralistic society, multiple values are common. It is important to understand how data might reflect certain desired states of concern about the social, economic, and/or ecological environment. We often measure what we value and value what we measure. Thus, as databases are stored measurements, we are lead to analyze what we value and value what we analyze. Consequently, we then map what we value and value what we map. As such, it is important to understand how databases, and the maps created from them, might then reflect certain valued states of society, and perhaps not others. To address this issue, we already introduced the concept of a concerns hierarchy which translates into a values structure in chapter 3 Table 3.6, but we will elaborate here. It is useful to remember that the value structure is part of the informal conceptual specification of the database design mentioned in Table 1.1. Below we provide examples of their use for values-informed database design. 
1-21 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc One can create a conceptual database model (diagram) by enumerating categories of concerns that people have about the topic no matter how broad. Naturally, the larger the problem, the more concerns, the longer it takes to enumerate concerns. When we fail to do this as database designers, then voices are not likely to be heard. This is a first step in the database design process from a multi-valued perspective. After concerns are enumerated, they can be organized into a concerns hierarchy to show the more general and more specific concerns, and reduce (or eliminate) redundancy (but retaining important/frequency) of concerns. The organization of the concerns hierarchy is the basis for devising a value structure, and one of the most important steps in database design when people have varying interests in topics, we might call these conflicting concerns. A concern is a general way of describing a number of more abstract terms, such as values, goals, objectives, and criteria. This was accomplished and portrayed in Chapter 3 as Table 3.6 for the Greenvalley GIS project. Wachs and Schofer (1969) long ago recognized the importance in distinguishing different levels of abstraction in the language of concerns when they wrote: Values, goals, objectives, and criteria are words to which transportation planners often refer without agreement as to the distinctions between them or the functional dependence of each upon the others. The primary reason for the existing confusion among the terms mentioned is the fact that all of the words introduced above are high-level abstractions. This is another way of saying that these terms may not be adequately defined by reference to something observable in the physical work. They are defined in terms of other words where they too may have no physical referents. The careful formulation of definitions of these terms has more than academic value. Since the transportation planner deals ultimately with facilities made of concrete, steel, and rubber, he must discuss goals, objectives and values in such a way that he can eventually relate these abstractions to the physical facilities of the city. The process of constructing the definitions for these terms leads to a clearer understanding of their importance in the urban transportation process, of the functional interdependencies among the concepts represented by each term, and their ultimate relationship to decisions about concrete and steel. (Wachs and Schofer 1969 pp. 134-135) Once a concerns hierarchy has been developed, it is then possible to label those concerns in terms of their meaning about values, i.e. in terms of values, goals, objectives, and criteria (attributes in GIS). Labeling the concerns is a consensus-based process: obviously some important concerns could go un-addressed. Performing that process results in a structure of values as general concerns, objectives as more detailed, and criteria as the way we can measure the objectives and hence put real meaning to deep-seated values. The structure could take the form of a tree/hierarchy or network depending on the overlap of the concerns. That value structure can then be used to formulate a conceptual database design. Sometimes people are able to define classes of data by knowing the subject matter, and simply writing out the class name and attributes. 
However, some people prefer to do some empirical work first, i.e., create instances of those classes, by writing them out in a word processor table, spreadsheet, or just a text document. Then, they generalize over those instances to create the 1-22 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc field names and an object class specification. Once several feature classes have been entered we can discuss the specification, including the relationships among those classes, which leads us to the next step. 2. Identify the key thematic layers and feature classes A thematic layer is a superclass of information, commonly consisting of a dataset(s) and perhaps several feature classes (hence feature layers), convenient for human conversation about geographic data. For each thematic layer, specify the feature classes that compose that thematic layer. For each feature class specify the data sources potentially available, spatial representation of the class, accuracy, symbolization and annotation to satisfy the modeling, query and/or map product applications. The Greenvalley geodatabase database design overview depicted in Figure 1.6 contains several feature datasets that are the thematic layers in the database design. Although the original Greenvalley GIS project did not need feature datasets because the siting problem was cast as a rather simple problem, the expanded GIS project can make use of them. Each dataset is created using a package in UML class diagrams (Figure 1.7) 1-23 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Figure 1.6 Greenvalley conceptual overview diagram generated as a result of UML database design also called static structure diagram in Microsoft Office Visio. 1-24 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Figure 1.7 Packages in the Greenvalley UML conceptual database design. Even the database specification outlined in Figures 1.6 and 1.7 is not as comprehensive as it could be for a small-area planning decision situation, as there are many information categories that could be added to make the project more informative. For example, we might enumerate the information categories and data layers as in Table 1.6. The data layers in the original Greenvalley GIS project are indicated in the second column with an asterisk *, whereas the additional data layers needed for the enhanced analysis are indicated with a plus +. It is the information need in the first column that is driving the need for data. Table 1.6. 
Geographic Information Categories and Data Layer Needs for the Original Greenvalley GIS Project and an Enhanced GIS Project Geographic Information Needs (based in part on Table 3.3) * original Greenvalley Project + enhanced Greenvalley Project Geographic Data Layer (based in part on Table 3.3) * original Greenvalley Project + enhanced Greenvalley Project Environmental characteristics + Soil characteristics * Topography * Water courses + Layer: Soil series Source: NRCS Map use: display and analysis of soil characteristics * Layer: Elevation (DEM) Source: Greenvalley DOT; USGS Map use: display and analysis of topographic terrain * Layer: National Hydrographic Dataset 1-25 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc * Land cover/use + Natural hazards * Ecologically sensitive areas Infrastructure characteristics + Buildings * Transportation * Utilities + Other utilities Land designations + Zoning Source: Green County; USGS and EPA Map use: display and analysis of surface water flows and water quality Layers: * Parcel boundaries * Land use + Site address Source: City of Greenvalley + Vegetation/Land cover Source: Multiple agencies (EPA, USGS, BLM, various state agencies) Map use: display and analysis of vegetation land cover Layers: + Geohazards + Floodplain areas + Tsunami-prone areas + Historic wildfires Source: multiple agencies including USGS, US Forest Service, state agencies Map use: display and analysis of natural hazard risk Layers: * Wetlands/Lowlands * Parks Source: City of Greenvalley + Protected areas + Protected habitats Sources: multiple agencies including USGS, EPA, US Forest Services, GAP-Analysis program, state agencies Map use: display and analysis Layers: + Building footprints * Roads * Streets * Sewer lines Source: City of Greenvalley + Gas and electricity lines + Water lines + Ground water wells + Septic tanks + Landfills Source: various local land use and planning management agencies, DOT, local utility providers Map use: display and analysis + Layers: Restrictions on uses Source: local land use planning and management agencies Map use: display and analysis 1-26 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Land administration + Boundaries Land ownership + Boundaries Layers: + Administrative areas + Cadastral framework (Public Lands Survey System PLSS) Sources: U.S. Census Bureau, local and regional land surveys Map use: display and analysis Layers: + Ownership and taxation + Parcel boundaries + Survey network Source: local land use planning and management agencies, local land surveys Map use: display and analysis 3. Detail all feature class(es) For each feature class, describe the spatial, attribute, temporal data field names. For each feature class specify the range of map scale for spatial representation, and hence the associated spatial data object types. This will determine if multiple resolution datasets for layers are needed. An analyst would have experience to know what resolutions of feature categories are appropriate for the substantive topic at hand. Revisit step 2 as needed to complete this specification. Identify the relationships among the feature classes. A GIS database design analyst need not use UML to explore and compose the detail. However, this detail will be documented in step 4. 4. Group representations into datasets A feature dataset is a group of feature classes that are organized based on relationships identified among the feature classes that help in generating information needed by stakeholders. 
The dataset creates the instance of a thematic layer or a portion of the thematic layer in which the relationships among feature classes are critical for deriving information. Analysts name feature classes and feature datasets in a manner convenient to promote shared understanding among analysts and stakeholders. Feature datasets are used to group feature classes for which topologies or networks are designed or edited simultaneously. A feature dataset is but one of several data design patterns provided in the geodatabase data model. A data design pattern is a frequently occurring set of relationships that a software designer has decided to implement in a software system. Discrete features are modeled with feature datasets composed of feature classes, but relationship classes, rules, and domains are three other design patterns. Continuous features are modeled with raster datasets. Measurement data is modeled with survey datasets. Surface data is modeled with raster and feature datasets. These other design patterns are used in more detailed database design below. 1-27 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Each of the feature datasets in the Greenvalley conceptual overview makes use of a more detailed page diagram (Figure 1.8) to document the details of the feature datasets partially identified in step 3. Figure 1.8 Land feature dataset package in the Greenvalley conceptual database design. 1.3.2 Logical Design of a Database Model Data processing operations to be performed on the spatial, attribute, and temporal data types individually or collectively derive the information (from data) to satisfy step 1. Such operations clarify the needs of the logical design. 5. Define attribute database structure and behavior for feature classes Apply subtypes to control behavior, create relationships with rules for association, and classifications for complex code domains. Subtypes Subtypes of feature classes and tables preserve coarse-grained classes in a data model, improve display performance, geoprocessing and data management, while allowing a rich set of behaviors for features and objects. Subtypes let an analyst apply a classification system within a feature class and apply behavior through rules. Subtypes help reduce the number of feature classes by consolidating descriptions among groups, and this then improves performance of the database. Relationships If the spatial and topological relationships are not quite suitable, a general association relationship might be useful to relate features. Relationships can be used for referential integrity persistence, for improving performance of on-the-fly relates for editing, and with joins for labeling and symbolization. 6. Define spatial properties of datasets Specify rules to compose topology that enforces spatial integrity and shared geometry, and specify rules to compose networks for connected systems of features. Topological and network 1-28 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc rules are set to operate upon features and objects. Set the spatial reference system for the dataset. Specify the survey datasets if needed. Specify the raster datasets as appropriate. Topology Topologic rules are part of the geodatabase schema and work with a set of topological editing tools that enforce the rules. A feature class can participate in no more than one topology or network. Geodatabase topologies provide a rich set of configurable topology rules. Map topology makes it easy to edit the shared edges of feature geometries. 
Networks Geometric networks offer quick tracing in network models. These are rules that establish connections among feature types on a geometric level and are different than the topological connectivity. Such rules establish how many edge connections at a junction are valid. If one were to compose the wastewater features as a network, edges and junctions would be needed as in Figure 1.9. Survey data Survey datasets allow an analyst to integrate survey control (computational) network with feature types to maintain the rigor in the survey control network. Raster data Analysts can introduce high performance raster processing through raster design patterns. Raster design patterns allow for aggregating rasters into one overall file, or maintain them separately. Figure 1.9 Wastewater network class diagram. 1.3.3 Physical Design of a Database Model 7. Data field specification 1-29 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc For data fields, specify valid values and ranges for all domains, including feature code domains. Specify primary keys and types of indexes. Classifications and domains Simple classification systems can be implemented with coded value domains. However, an analyst can address complex (hierarchical) coding systems using valid value tables for further data integrity and editing support (Figure 1.10). Figure 1.10 Data type specifications for land parcel data depicted using MS Office Visio UML class properties. At this time primary and secondary keys for the data fields are specified, based on valid domains of each field. A data key reduces the need to perform a global search on data elements in a data file. Hence, a key provides fast access to data records. A primary (data) key is used to provide access within the collection of features that can be distinguished by a unique identifier (Figure 1.11). When one uses a primary key, you can easily distinguish one data record from another. A parcel identification code is an example of a potential primary key for land parcel data records. A secondary key is used for data access when the data elements are not unique, but are still useful to distinguish data records, as for example land use codes. All land parcel data records of a particular land use code can be readily accessed. 1-30 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Figure 1.11 ObjectID for the primary key depicted using MS Office Visio UML class properties. 8. Implementation of schema Construct the data schema to reside in a database management system. A schema semantics check must be performed to insure a computable schema (Figure 1.12). After running the semantics check and errors are identified, modifications data schema need to be made to rectify the errors described and the semantics check should be run again. A database analyst would address any many errors as possible before rerunning the semantic check. After the error check results in no errors, or least only errors that are tolerable, then an analyst can test the computability of the data schema. 1-31 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Figure 1.12 Schema semantics check. 9. Populate the database The last step is to load the data into the schema, often called populating the database. 1.4 Summary Data modeling is a fundamental concern in GIS databases. Data modeling deals with data classes, information categories, evidence corroboration, knowledge building, and wisdom perspectives. 
We defined the terms to provide a clearer sense of their differences and relatedness to help with data modeling effort. The differences and similarities do not come easy, as you have to work with the concepts to make them work as second nature. The purpose of elucidating the above five levels of knowing are meant to provide readers a perspective that GIS is not just about data and databases, but extends through higher levels of knowing. Understanding those terms sets the stage for understanding data models and database models. Data modeling is a process of creating database designs. Data models and database models are both used to create and implement database designs. We can differentiate data models and database models in terms of the level of abstraction in a data modeling language. A database design process creates several levels of database descriptions, some oriented for human communication, while others are oriented to computer-based computation. Conceptual, logical, physical have been used to differentiate levels of data modeling abstraction. A data model is the foundation framework that underlies the expression of a database model, i.e. we use data models to design database models. A database model is a particular design of a database, i.e., the design 1-32 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc has some real world substantive focus, not just an abstract expression of data constructs. A conceptual data model organizes and communicates the meaning of data categories in terms of object (entity) classes, attributes and (potential) relationships. Logical data models, (e.g. object, relational, or object-relational) are the underlying formal frames for database management system software. A physical data model expresses physical storage units and includes capabilities to specify performance enhancements such as indexing mechanisms that sort data records. Each of those data models can have a corresponding database model for a particular set of information categories. There are three special aspects to data models, the constructs, operations (that establish relationships), and integrity/validity constraints (rules). As the first aspect, spatial data constructs in geospatial data models are composed of geospatial object classes; also called data construct types by some people. The second major aspect of data models concerns operations, i.e., relationships among constructs. Operations are a way of deriving relationships. A third major aspect of conceptual data models are the types of rules that assist in constraining operations on data elements. A validity rule maintains the valid character of data. No data should be stored in a database that does not conform to the particular construct type which is being manipulated at the time. Differences in data models dictate the differences in data constructs used to store data, the differences in operations on those data for retrieving and storing, plus the differences in validity constraints used to ensure a robust database. ArcGIS software includes a large, (but still not all) set of data models: raster or image/grid data model, triangulated irregular network (TIN) data model, shapefile data model, coverage data model, and the geodatabase data model. The TIN and the Grid are often used to represent continuous surfaces. The shapefile, coverage, and geodatabase data models are used for storing points, lines and areas that represent mostly discrete features. We use data models to create database models. 
Database models are the outcomes of a database design process. We introduced a geodatabase database design process as a data modeling process consisting of nine steps spread across the three levels of data models, conceptual, logical and physical data models. The conceptual design process that forms a conceptual database model consists of four steps: 1) identifying the information products or the research question to be addressed, 2) identifying the key thematic layers and feature classes, 3) detailing all feature class(es) and 4) grouping representations into datasets. The logical design process that forms a logical database model consists of two steps: 2) defining attribute database structure and behavior for feature classes and 2) defining spatial properties of datasets. The physical design process that forms a physical database models consists of three steps: 1) data field specification, implementation of the schema, and populating the database. The outcome of that process was an extended Greenvalley database design and database. 1.5 References Arctur, D. and Zeiler, M. 2004. Designing Geodatabases, ESRI Press. Chen, P P-S 1976. The entity-relationship modeltoward a unified view of data, ACM Transactions on Database Systems (TODS), 1(1):9-36. 1-33 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Codd, E. F. 1970. A relational data model for large shared data banks. Communications of the ACM 13(6), 377-387. Cova, T.J., and Goodchild, M.F. 2002. Extending geographical representation to include fields of spatial objects. International Journal of Geographical Information Science, 16(6): 509-532 ESRI (Environmental Systems Research Institute) 2006. Data model gateway. http://support.esri.com/index.cfm?fa=downloads.dataModels.gateway, last accessed November 15, 2006. ESRI (Environmental Systems Research Institute) 2003. Building Geodatabases with CASE Tools, http://support.esri.com/index.cfm?fa=knowledgebase.documentation.viewDoc&PID=43&MetaID=658, last accessed November 17, 2006. Hull, R. and R. Kling 1987. Semantic Database Modeling, ACM Computing Surveys, 19(3):201-260. IBM 2003. Entity-Relationship Modeling. http://www3.software.ibm.com/ibmdl/pub/software/rational/web/whitepapers/2003/ermodeling.pdf, last accessed November 15, 2006. IUCN (International Union of the Conservation of Nature) 1997 Approach to Assessing Progress Toward Sustainability in the Tools and Training Series, IUCN Publication Services Unit, Cambridge, UK, available from Island Press, Washington DC. Kent. W. 1984 A realistic look at data. Database Engineering, 7, 22. Longley, P. Goodchild, M. Maguire, M. and Rhind, D. 2001. Geographic Information Systems and Science. Wiley. New York. Martin, J., 1976. Principles of Data-Base Management, Prentice-Hall, Englewood Cliffs. National Institute for Standards and Technology 1994. Federal Information Processing Standard 173-1, Spatial Data Transfer Standard, National Institute for Standards, Gaithersburg, MD. Nyerges, T. 1991. Geographic Information Abstractions: Conceptual Clarity for Geographic Modeling, Environment and Planning A, 1991, vol. 23:1483-1499. Rumbaugh, J., Jacobson, I., and Booch, G. 1999. The Unified Modeling Language Reference Manual, Addison-Wesley, Reading, Massachusetts. Sayer, A. 1984. Method in Social Science, London: Hutchinson. Sundgren, B. 1975 A theory of data bases. New York: Petrocelli/Charter. 1-34 Nyerges GIS Database Primer GISDBP_chapter_1_v17.doc Wachs, M., and Schofer, J. L. 1969. Abstract values and concrete highways. 
Traffic Quarterly, 133-145.

Zeiler, M. 1999. Modeling Our World, ESRI Press, Redlands, CA.

1.6 Review Questions

1. Differentiate among data, information and evidence.
2. Why is it important to differentiate evidence from knowledge?
3. Why is it useful to understand the difference between a data model and a database model when choosing a software system versus choosing the data categories to develop an application?
4. Why do we have three levels of database abstraction: conceptual, logical, and physical models?
5. What are the three components of every conceptual, logical and physical data model?
6. What is the difference between an image and a grid data model?
7. Why did ESRI develop the geodatabase data model?
8. What is a general process for undertaking database design?
9. Why is a concerns hierarchy important to database design?

1.7 Glossary

class: a generic term for a data category composed by bundling observations of like kinds; for example, a feature class in ArcGIS.
data: raw observations for characterizing a past, present, future, or imaginary topic (reality).
data model: the collection of constructs, operations, and constraints that form the basis of a data management system; a concept directed at software design but useful in characterizing the capabilities of a database management system. A data model can be specified at conceptual, logical, and physical levels.
database design: a process composed of specifying conceptual, logical, and physical schemas as the major steps in formulating a database model.
database model: a schema and data dictionary associated with the outcomes of a particular database design process.
data field: a fundamental storage unit of data.
data record: a collection of data fields.
data structure: a way of organizing data; a concept similar to abstract data type.
data type format: the specification of a data field in terms of ways of storing data, for example as a floating point number, integer, text (character) string, blob, etc.
data type, abstract: the specification of a class using data fields in a conceptual manner at the level of a conceptual data model.
evidence: information assembled to encourage a common interpretation of a collection of observations.
information: data situated in a context that takes on meaning for use.
knowledge: evidence brought together that reinforces (corroborates) an interpretation of data, information, and/or evidence and has withstood challenges about its validity.
rules, for a data model: a statement about the way to test the believability of data; for example, validity rules: rules to establish the correctness of data stored in a database.
schema: a description of data categories plus data fields that characterize features (entities) about some portion of the world in a database model.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00309.warc.gz
CC-MAIN-2019-26
86,429
2
https://emacs.stackexchange.com/questions/56001/emacs-text-mode-as-generic-2-space-tab-plaintext-editor-trouble-disabling-r
code
I need some help getting custom indentation behavior with my .emacs config. In emacs text-mode / Fundamental Mode, by default a tab will take me to the location of the top line's next word (non-whitespace char):

reallylongword nextword
<TAB>----------^

whereas in 90% of cases, I'd like to do:

reallylongword nextword
--^

There are two possible ways I believe can achieve this:

1 - Automatically going to minor mode "outline-mode" whenever custom extension ".plain" is opened, or using some other improvement on Fundamental Mode.
2 - Keybind TAB to be fixed to 2 spaces while in text-mode -- or make this feature a default, but toggleable.

I've looked at the following links for reference already but after trying suggested fixes, I need some help detecting problems in my config. Indents work how I like for Python / C++, etc, just not plaintext. Lots of effort just to avoid [space][space]... but I want to know why my current config doesn't alter text-mode properly.

My config, using Emacs 26 on macOS:

(package-initialize)

;; use the $PATH from bashrc
(when (memq window-system '(mac ns x))
  (exec-path-from-shell-initialize))

(tool-bar-mode -1)

;; Tab settings
(setq-default tab-width 2)             ;; always using tab = 2 spaces
(setq-default c-basic-offset 2)        ;; default is 4
(setq-default python-indent-offset 2)  ;; default is 4
(setq-default tab-stop-list '(2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84))

;; Tabs to 2 spaces for text-mode too..
;; https://stackoverflow.com/questions/3976240/how-to-change-indentation-in-text-mode-for-emacs
;; https://www.xemacs.org/Links/tutorials_1.html
;; indent-for-tab or indent-relative ?

(setq c-default-style "stroustrup")

;; Show line numbers
(global-linum-mode t)

(custom-set-variables
 ;; custom-set-variables was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(indent-tabs-mode nil)
 '(package-selected-packages
   (quote
    (exec-path-from-shell haskell-mode use-package all-the-icons doom-themes)))
 '(standard-indent 2))

(custom-set-faces
 ;; custom-set-faces was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 )

;;; ~~ ATTEMPTED INDENTATION FIXES ... i.e. random, found emacsLisp hotfixes ~~

;;; OUTLINE MODE UPON FILE EXT DETECTION:
;; "outline-mode" not recognized..
;;(setq outline-mode
;;  (cons '("\\.plain$" . text-mode) outline-mode))

;;; RE-BIND indent-line-function to be insert-tab, set to tab-width 2, whenever in text mode
(add-hook 'text-mode-hook
          (lambda ()
            (setq-default indent-line-function (quote insert-tab))))

;; recent files emacs
(recentf-mode 1)
(setq recentf-max-menu-items 25)
(setq recentf-max-saved-items 25)
(global-set-key "\C-x\ \C-r" 'recentf-open-files)

;; auto close bracket insertion. New in emacs 24
(electric-pair-mode 1)
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00528.warc.gz
CC-MAIN-2020-29
3,057
11
https://stackoverflow.com/questions/56885594/ios-pwa-special-mailto-link-opens-tab-inside-pwa-browser-and-stuck-at-blank-tab
code
The mailto link was working as expected on previous versions of iOS. After updating to 12.2+, when opening the default mail app using the mailto link, the mail app works. But when coming back to the PWA, the application is stuck on a blank white screen due to the iOS feature update to PWAs (saving app state when switching apps). Now I'm stuck with a blank screen even after swipe-closing the app.

I'm using the following code to launch the mail app:

<a href="mailto:[email protected]" target="_blank">send mail</a>

I have tried all other options of targets; only the _blank target opens the default mail app. Other targets are not working, as mentioned in this Stack Overflow answer.

Has anyone faced a similar issue?
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00232.warc.gz
CC-MAIN-2019-30
702
6
https://listserv.acm.org/SCRIPTS/WA-ACMLPX.CGI?A2=1608A&L=CHI-ANNOUNCEMENTS&D=0&H=A&S=a&P=922327
code
The HCI Bibliography has been updated for the new school year with about 3000 entries added in the past month. See the conference coverage page: and the journal coverage page:

Also for the new school year, I've formatted Lewis & Rieman's shareware book "Task-Centered User Interface Design" in HTML:

The HCI Bibliography is a free-access online bibliography of Human-Computer Interaction publications and resources. With over 23,500 entries in a searchable database, the database now contains over 1000 internet resources in its Webliography, in addition to thousands of links to full-text articles. The Webliography is used to generate the SIGCHI link pages on education, intercultural issues, kids and computers, publications and 14 general categories of HCI, as well as the link page for SIGCAPH on accessibility.

The home page at http://www.hcibib.org/ is now a gateway to some of the most popular online HCI columns and developer resources. Try these for self-education. (The HCI Bibliography has no affiliation with these.)

Gary Perlman, [log in to unmask]
The HCI Bibliography Project, http://www.hcibib.org/
Post Office Box 20187; Columbus, Ohio 43220 USA
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00155.warc.gz
CC-MAIN-2023-14
1,165
22
http://javascriptunscripted.com/blog/2017/12/
code
Now it’s been some time since HTML5 came on the scene. Its shiny new gleam has been replaced with a well-worn veneration and stability. Although it’s made some marked improvements over its predecessors in terms of not requiring laboured attributes or overly strict syntax, one of its tenets means a certain level of backward compatibility still exists. And owing to certain habits or strict requirements in the past, some developers simply would’ve stuck to those habits or not realised that, with the latest iteration, some items were no longer needed. This lists the obvious, and not so obvious, items. This is the most obvious one and one which is required. So, without delving into all the iterations and variances, the last two major branches, the XHTML and HTML 4 series, required a lengthy declaration of the schemas used for the document type definition (DTD). HTML5 did away with all the cumbersome description and replaced it with the all too popular and well received <!DOCTYPE html>. Linking to external files HTML5 enables asynchronous loading of scripts without locking the other processes from loading in a browser. This is done by using the async attribute: <script src="js/script.js" async></script> How? The async attribute saves developers from having to worry about script link placement and all the consequences the decision imbues. defer is similar to the async attribute; however, it can be used when multiple scripts are needed and should be loaded in order. Scripts with this attribute will wait to execute until the page has finished loading. These are some of the enhancements for the moment. If there are ones that I have missed, especially major ones, let me know in the comments. (A short example contrasting async and defer follows this entry.)
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00005.warc.gz
CC-MAIN-2019-30
1,703
9
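A small illustrative snippet (not from the original post; file names are placeholders) showing the difference described above: the async script runs as soon as it arrives, while the two deferred scripts wait for the document to be parsed and then run in order.

<script src="js/analytics.js" async></script>
<script src="js/library.js" defer></script>
<script src="js/app.js" defer></script>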
https://meta.askubuntu.com/users/139675/j0h
code
been using Linux since 2005. Ubuntu since whenever edgy eft was new. Lucid Lynx Ubuntu was the best Ubuntu I have ever used. Member for 6 years, 8 months 24 profile views Last seen Apr 1 at 2:28 - Ask Ubuntu 10.6k 10.6k 1717 gold badges6464 silver badges140140 bronze badges - Raspberry Pi 2.1k 2.1k 44 gold badges1414 silver badges2727 bronze badges - Unix & Linux 1.7k 1.7k 22 gold badges1919 silver badges2929 bronze badges - Stack Overflow 1.2k 1.2k 22 gold badges1313 silver badges3838 bronze badges - Super User 858 858 22 gold badges1010 silver badges2323 bronze badges - View network profile → Top network posts - 125 "autoreconf: not found" error during making qemu-1.4.0 - 97 What does the lightning bolt mean? - 66 Detect and mount devices - 46 upowerd hogging CPU - 36 upowerd hogging CPU - 32 ssh unable to negotiate - no matching key exchange method found - 30 How does using the tilde work as a shortcut to my home directory? - View more network posts →
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668544.32/warc/CC-MAIN-20191114232502-20191115020502-00348.warc.gz
CC-MAIN-2019-47
972
19
http://halo.wikia.com/wiki/User_blog:FEARDEATH24/Sorting_out_Spoilers
code
I was going through the achievement page and stumbled upon the Halo 4 achievements section. At that point I did not know how long the Halo 4 campaign was, or what other Covenant species still exist in Halo 4. Upon reading the list I found out that there are (*) missions in Halo 4. I added the Spoiler templates, but I am unsure of whether it is to be added to all of Halo 4's achievement pages. I am open to suggestions or views on this matter.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886123359.11/warc/CC-MAIN-20170823190745-20170823210745-00717.warc.gz
CC-MAIN-2017-34
444
2
https://yoursecondcall.com/feedback
code
I deeply appreciate your feedback on the services I provide and all your interactions with Your Second Call. My entire motivation is to help make people’s lives better, so every bit of feedback I get helps me improve my services. Your responses are always private unless you agree to share your review. Thank you for your time! FYI, The embedded form won’t auto scroll back up after you submit. So, thank you for your feedback! 😄
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00484.warc.gz
CC-MAIN-2023-40
436
4
https://orionfeedback.org/d/1946-keeper-password-manager-extensions-not-loading-anything
code
Steps to reproduce: I installed Keeper Password Manager from the Google Chrome Web Store, and upon clicking the icon a blank box appears and nothing else occurs. Installing the same add-on from the Firefox Add-Ons store leads to the same thing. I disabled all other add-ons and restarted the browser several times, and nothing changed. I expected a small box to appear with a login screen for my Keeper password vault in order to use it on any website I was logging into. Orion Version 0.99.113.2-beta (WebKit 613.1.12), macOS 12.3; MacBook Pro 16" with Apple M1 Pro: vs. what normally occurs in the Brave or Safari browser.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103558.93/warc/CC-MAIN-20231211045204-20231211075204-00742.warc.gz
CC-MAIN-2023-50
619
5
http://news.perlfoundation.org/2010/12/perl-oasis-call-for-papers.html
code
- Alien::Base Grant - Report #9 (Final) - 2013Q2 Grant Proposals - 2013Q2 GP: YACT - Yet Another Conference Tool - 2013Q2 GP: rpm.perl.it - 2013Q2 GP: Review of Perl Web Frameworks - 2013Q2 GP: Next Release of Pinto With Key Features - YAPC::NA Call for Sponsors - Outreach Program for Women Moves Forward with €1000 Pledge - Fixing Perl5 Core Bugs: Report for Month 37 - Improving Perl 5: Grant Report for Month 17 About this Entry This page contains a single entry by Josh McAdams published on December 6, 2010 6:20 AM. Final Grant Update: The Perl Survey was the previous entry in this blog. Manual for Game Development with SDL Perl - Grant Report #4 is the next entry in this blog.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00078-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
688
14
https://bugfox.net/blog/2007/04/12/recovering-from-a-crashed-hard-disk/
code
Two weeks ago the hard disk on my main Windows machine (the one I use for video editing and commercial software in general) failed to the point of becoming unbootable and I have spent most of my free time since recovering. Some random observations. - Even if you are reasonably careful about backing up your important stuff, the disk will die when you have valuable stuff on it that is not backed up. - Build yourself a copy of The Ultimate Boot CD for Windows now. (It will be a lot harder to build if your only Windows machine with a CD burner is dead.) This allowed me to boot up the machine and transfer my data to an external hard drive. - Get yourself an external hard drive with a USB connection now. These are reasonably cheap and they are the only really easy way to back up all your important data on a regular basis. - If you buy a new SATA hard disk it will probably be SATA II (300MB/s). If your motherboard is more than a year old it will probably only support SATA I (150MB/s). To get this to work right you will probably have to put a jumper on the drive to make it run at 150MB/s, and you may have to go to the manufacturer’s web site to find out how to do this.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00268.warc.gz
CC-MAIN-2023-23
1,180
5
https://www.oreilly.com/library/view/microsoft-iis-100/9781787126671/ced9f005-d541-43f1-9ca7-a2eaeafbb1a7.xhtml
code
First, we have to install some .NET supported components. - Open Server Manager on Windows Server 2016 and click on the Manage menu. Once in there, click on Add Roles and features. - Click on Next until you get the Select server roles wizard. Follow the exact route highlighted in this figure: - Expand Application Development. You have to select .NET Extensibility 3.5, .NET Extensibility 4.6, ASP, ASP.NET 3.5, ASP.NET 4.6, ISAPI Extensions, and ISAPI Filters. - Click on Next to finish. Now let's go on and upload the .NET framework web pages we created for demo purposes. - Open IIS Manager, click on Default Web Site, and you will ...
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669276.41/warc/CC-MAIN-20191117192728-20191117220728-00018.warc.gz
CC-MAIN-2019-47
639
6
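For reference, roughly the same Application Development role services listed in the entry above can be installed from PowerShell on Windows Server 2016. This is a sketch: the feature names below are the ones usually reported by Get-WindowsFeature, but verify them locally before relying on it.

# install the role services used in the walkthrough above
Install-WindowsFeature -Name Web-Net-Ext, Web-Net-Ext45, Web-ASP, Web-Asp-Net, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter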
https://www.nature.com/articles/d41586-020-03339-5?utm_source=sfmc&utm_medium=email&utm_campaign=greatmail&utm_content=newsletter&error=cookies_not_supported&code=6a07a15f-c300-4e56-8e83-3ceac8fe0d86
code
In 2018, shortly after following my PhD supervisor to the University of Glasgow, UK, I stumbled into a meeting room, convinced I had found a seminar on research methods. Soon I realized that this was actually the teaching-management group meeting. But, too mortified to say anything, I stayed and played it cool, smiling at the teaching staff sitting around me at the table. The embarrassment was worth it, however, because on this fateful day I met one of my future mentors, who would have a lasting impact on my PhD. I had already learnt over the course of my undergraduate and master’s degrees that mentors can be incredibly valuable for pursuing professional goals and personal development. Indeed, in a large-scale survey on the experience of PhD students, researchers found that those students who sought mentorship beyond their supervisor were more likely to complete their degree. My experiences have challenged my previously held image of mentorship and reframed for me what a modern mentor–mentee relationship looks like. I once thought that a mentor was a wise, ancient figure, akin to Yoda in Star Wars, whose every word is a gentle pearl of wisdom. Now I see mentorship as something more equal, as well as being hands-on and collaborative. Find your own mentors Joining a department mid-PhD was a challenging experience (I had followed my supervisor from Bangor University, UK, to complete my PhD in human–robot interaction). It helped me greatly when I identified role models to complement the regular meetings and supportive relationship I had with my supervisor. Mentors can take on different roles, from inspirational star to personal career guide, and the levels of personal involvement can dynamically change. After my accidental infiltration of the teaching-management team, I was lucky to cast Niamh Stack, the director of teaching and learning, in the role of career guide. My apologetic message to her about the mishap eventually led to collaborative projects, the founding of a running group and a supportive mentoring relationship. Her mentorship completely transformed my PhD experience. She would encourage me to take public-engagement opportunities and help to celebrate my successes. When I could not run a half marathon as a result of injury, she co-organized a remote relay race with my running group, resulting in them taking on the full 21 kilometres on my behalf. I was also able to recruit a personal adviser. Ruud Hortensius, who at the time was a postdoctoral researcher in our laboratory, helped me to develop an interest in making my research more open and reproducible, and often reminded me to keep a healthy work–life balance. This mentorship relationship emerged out of our discussions about open science, career aspirations and the shared trials and tribulations of learning a programming language. Before the pandemic, Hortensius’s door was always open. Nowadays I can still turn to my personal adviser for honest advice — although our chats currently take place on Slack and Zoom. The initial informal mentor–mentee relationship became a more formal co-supervisory agreement during my PhD, although it never lost its light-hearted tone. My experiences have taught me that instead of relying on one person to take on all your professional development needs, it is possible to recruit an ‘unofficial board of mentors’. Maybe you are looking for someone to support you in learning new methodological approaches or to help you develop your writing skills. 
Try to imagine different mentors in the positions of role model (a person in your outer circle who inspires you), career guide (someone who infrequently supports your career progression) and personal adviser (someone who provides psychosocial support). Without my own unofficial board of mentors, I would have missed out on scientific collaborations, job opportunities and personal development. And I have gained all these benefits despite never formally striking up a mentoring agreement. How to recruit mentors But what do you do if no mentor is available in your immediate environment, especially during the COVID-19 pandemic? When I was looking to grow my professional network, I reached out to staff members in the department and asked them for help and advice. They introduced me to contacts in their network and eased my anxiety about introducing myself to a stranger. Your supervisor or a fellow lab member might be able to connect you with a potential mentor if you struggle with making that first step. I found that most people enjoyed sharing their personal career journeys and were keen to help. In an e-mail to a potential mentor, I found that it helped to clearly and concisely outline why I was seeking their support. I read about their background and interests in advance, and thought about how my own career interests mapped to theirs. Not all of these contacts turned into mentoring relations, which is OK — it can be the start of a different relationship, and it’s all part of the learning experience. If mentors in your immediate physical environment are scarce, social media can provide opportunities. In fact, I learnt a lot from Twitter. Initially, I found engaging in the dialogue much too daunting, so I would follow relevant academic accounts, such as Career Conversations, and pick up some lessons along the way. And at the beginning, I was discouraged when my tweets didn’t reach many people. But I learned that establishing these relationships took more thought and care than simply using a conference hashtag or retweeting. There are many great guides that I relied on, such as ‘A Scientist’s Guide to Social Media’ by Jen Heemstra1 and Twitter for Scientists (2020) by Dan Quintana. Twitter has connected me with academics and professionals who have given me valuable advice, offered informational interviews and shared exciting opportunities. You can take advantage of this plethora of crowdsourced knowledge and advice: the hierarchies are flat, and often interactions online can lead to long-term mentor–mentee relationships. Even if ‘conventional’ mentorship relationships do not emerge through social media, the peer-mentoring network around #PhDchat has been immensely helpful in my PhD. Sometimes it just helps to know that someone else on the other side of the world is also struggling to make sense of their data. My experience with formal and informal advisers has shown me that the word mentor is not as elitist as I had previously thought (it turns out mentorship is not reserved for a select few Jedis). Mentorship can take many forms, whether it’s done over Skype, on social media or through e-mail. And it can be especially helpful early in an academic career, a period that is characterized by transitions and difficult choices.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00110.warc.gz
CC-MAIN-2022-33
6,811
22
https://community.virginmedia.com/t5/Networking-and-WiFi/Guest-Network-Frequency/td-p/3882204
code
Apologies if this has been posted before but I cannot see a definitive answer. I have a number of smart devices connected on my Hub 3 on the 2.4GHz frequency and they work fine. However, I want to move them to the guest network to separate them from my normal network. I have set up the guest network and noticed it is broadcasting at 5GHz which is a pain when the smart devices are on the 2.4GHz frequency. I tried switching off the 5GHz in the admin setting like I did when setting up the devices initially but the guest frequency is still broadcasting at 5GHz. So is it possible to set up the guest network at 2.4GHz and if so how do I do this?
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986703625.46/warc/CC-MAIN-20191020053545-20191020081045-00193.warc.gz
CC-MAIN-2019-43
647
4
https://supportforums.cisco.com/t5/other-security-subjects/vpn-passthrough-on-asa-5505/td-p/922610
code
Been trying to configure an ASA 5505 to pass PPTP traffic from remote clients out in the cloud, to a PPTP server behind the device in my internal network. Every time I try connection it fails, with the logs showing that my IP was identified and dropped as an "IP Spoof". I've disabled my ACL's and tried connecting, with the same effect. I've also tried programming static routes, to the same effect. I have enabled RPF on the outside interface with no effect. I've tried reading the command line configuration guide on how to program the device, but all the options are to program the device as the VPN server itself. Login to the FXOS chassis manager. Direct your browser to https://hostname/, and log-in using the user-name and password. Go to Help > About and check the current version: Check the current version availa... We have configured the outside and inside Interface with official ipv6 adresses, set a default route on outside Interface to our router, we also have definied a rule , which also gets hits, to permit tcp from inside Interface to any6. In Syslog I also se...
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649961.11/warc/CC-MAIN-20180324073738-20180324093738-00226.warc.gz
CC-MAIN-2018-13
1,084
10
https://www.freelancer.com/projects/PHP-Engineering/GPS-Monitoring-System-with-Wireless/
code
I need people who can work on developing a GPS Monitoring System with Wireless Networking in Mumbai. Please see added file along with this post. I require people who are extremely good at the required job. Only people from India may apply as it will include physical meetings too. The bid range is not fixed. Requirement is urgent. 3 freelancers are bidding on average ₹58333 for this job Hi, I am ready for this job. Kindly provide me kind of chat platform where we can discuss the requirement. Kindly open PM for more discussion on this. Thanks, Dushy Hi, I am an Embedded System Professional with 6 yrs industrial experience. I am interested in this project. Kindly refer your private message box for further details.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808260.61/warc/CC-MAIN-20171124161303-20171124181303-00506.warc.gz
CC-MAIN-2017-47
722
4
https://xcp-ng.org/forum/topic/241/xcp-ng-7-5-0-release-candidate?page=3
code
XCP-ng 7.5.0 Release Candidate
Nope, I'm using Lili USB creator and so far it was working fine, as I mentioned, whether I use Citrix XenServer or XCP-ng. But sorry, I just realized something and want to make a remark. This software has an option to hide some files, and because I was in a hurry yesterday I forgot to remove the option. I have a backup of my VMs, that is why I will test it again today without this option. If it is working I will update you, as well for those who are not comfortable using the command line.
Hello dear RC testers! We are releasing 7.5.0 final today, so here's how to go from 7.5.0RC to 7.5.0. Warning: this is only for people who installed the RC. A detailed procedure will be given in the release announcement for upgrading an XCP-ng 7.4 server.
Option 1, upgrade using the installation ISO: will work, although it may be a little overkill since you already have a nearly-final 7.5.0.
Option 2, update the yum repository files and update the system:
# remove the RC-specific repository file if present
rm /etc/yum.repos.d/xcp-ng-7.5.repo -f
# install the 7.5 official repository file
wget https://updates.xcp-ng.org/7/xcp-ng-7.5.repo -O /etc/yum.repos.d/xcp-ng.repo
# update. If you had updated your RC regularly, this will install nothing, else you'll get a few packages
yum update
# if the kernel or xen-* packages got updated, reboot your server
# that's it, you've got 7.5.0 final!
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.31/warc/CC-MAIN-20231206063543-20231206093543-00272.warc.gz
CC-MAIN-2023-50
1,405
11
http://pkgsrc.se/fonts/dbz-ttf
code
/dbz-ttf, Divide By Zero TrueType fonts created by Tom Murphy 7 20050114nb2, Package name: dbz-ttf-20050114nb2, Maintainer: pkgsrc-users This is a collection of TrueType fonts which be created for fun. You can use them for free to do anything you want, you just can't take the fonts and resell them (on a CD, for instance). Read the readme.txt if you're not sure what this means. Required to run: Version history: (Expand) - (2014-06-02) Updated to version: dbz-ttf-20050114nb2 - (2012-06-11) Package has been reborn - (2012-06-11) Package deleted from pkgsrc - (2006-02-06) Updated to version: dbz-ttf-20050114nb1 - (2005-10-05) Package added to pkgsrc.se, version dbz-ttf-20050114 (created) CVS history: (Expand) | 2015-11-03 21:45:25 by Alistair G. Crooks | Files touched by this commit (776) | Add SHA512 digests for distfiles for fonts category Problems found locating distfiles: Package acroread7-chsfont: missing distfile acrobat7-fonts/chsfont.tar.gz Package acroread7-font-share: missing distfile acrobat7-fonts/korfont.tar.gz Package acroread7-korfont: missing distfile acrobat7-fonts/korfont.tar.gz Package acroread9-chtfont: missing distfile \ Package acroread9-jpnfont: missing distfile \ Package cyberbase-ttf: missing distfile cyberbit-ttf/Cyberbase.ZIP Package cyberbit-ttf: missing distfile cyberbit-ttf/Cyberbit.ZIP Package pixel-sagas-startrek: missing distfile PS_Font_Fontana.zip Package pixel-sagas-startrek: missing distfile PS_Font_Montalban.zip Package pixel-sagas-startrek: missing distfile PS_Font_Probert.zip Package pixel-sagas-startrek: missing distfile PS_Font_Sternbach.zip Package pixel-sagas-startrek: missing distfile PS_Font_Trek_Arrowcaps.zip Package umefont-ttf: missing distfile umefont_560.tar.xz Otherwise, existing SHA1 digests verified and found to be the same on the machine holding the existing distfiles (morden). All existing SHA1 digests retained for now as an audit trail. | 2014-06-01 18:35:38 by Thomas Klausner | Files touched by this commit (216) | | Move fonts from lib/X11/fonts to share/fonts/X11. As discussed on tech-pkg. | 2012-10-03 20:28:33 by Aleksej Saushev | Files touched by this commit (154) | Drop superfluous PKG_DESTDIR_SUPPORT, "user-destdir" is default these days. | 2009-08-25 13:56:36 by Thomas Klausner | Files touched by this commit (34) | Change default for zip extraction to leave files as they are. Previously, zip extraction by default converted to lower case. Fix some packages that need it and remove -L from some packages that manually set it. | 2009-06-14 19:54:16 by Joerg Sonnenberger | Files touched by this commit (64) | Remove @dirrm entries from PLISTs | 2009-05-19 10:59:39 by Thomas Klausner | Files touched by this commit (383) | Use standard location for LICENSE line (in MAINTAINER/HOMEPAGE/COMMENT block). Uncomment some commented out LICENSE lines while here. | 2008-03-03 21:17:13 by Johnny C. Lam | Files touched by this commit (43) | Mechanical changes to add DESTDIR support to packages that install their files via a custom do-install target. | 2006-03-04 22:31:14 by Johnny C. Lam | Files touched by this commit (2257) | Point MAINTAINER to [email protected] in the case where no developer is officially maintaining the package. The rationale for changing this from "tech-pkg" to \ that it implies that any user can try to maintain the package (by submitting patches to the mailing list). Since the folks most likely to care about the package are the folks that want to use it or are already using it, this would leverage the energy of users who aren't
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191444.45/warc/CC-MAIN-20170322212951-00420-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
3,560
60
http://javadesign-patterns.blogspot.com/2013/08/oops-design-principles.html
code
Why Object Oriented Design Principles(OOPS)? Software design principles represent a set of guidelines that helps us to avoid having a bad code or design. These are GRASP and SOLID. Programming is full of acronyms like these. Other examples are : DRY (Don’t Repeat Yourself!) and KISS (Keep It Simple, Stupid!). But there obviously are many, many more, some of them will be listed here. Object Oriented Design (OOD) Guidelines: During the design process, as we go from requirement and use-case to a system component, each component must satisfy that requirement, without affecting other requirements. Following are points to avoid Design hazard during the analysis and design process. - Coupling Types and Degree - Optimization Techniques - Restructure (Refactoring) Consideration Ever heard of SOLID code? Probably: It is a term used by Robert Martin for describing a collection of design principles for "good and clean" code. To start understanding of these design principles, why not approach the problem from the other side for once? Looking at what makes up bad code. Sorry, but your code is STUPID! Nobody likes to hear that their code is stupid. It is offending. Don’t say it. But honestly: Most of the code being written around the globe is an unmaintainable, unreusable mess. So what characterizes such code? What makes code STUPID? - Tight coupling - Premature Optimization - Indescriptive Naming So what are the alternatives to writing STUPID code? The answer is, make it Simple, SOLID and GRASP. What are SOLID rules? - Single responsibility principle - Open/closed principle - Liskov substitution principle - Interface segregation principle - Dependency inversion principle What is GRASP ? GRASP stands for General Responsibility Assignment Software Principles / Patterns, which are the following: - Information Expert - Low Coupling - High Cohesion - Pure Fabrication - Protected Variations
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122041.70/warc/CC-MAIN-20170423031202-00113-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,907
32
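As a toy illustration of the single-responsibility idea mentioned in the entry above (not from the original article; all names are invented), here is a Python sketch where formatting and persistence live in separate classes so each has exactly one reason to change:

class ReportFormatter:
    """Knows only how to turn data into text."""
    def format(self, data: dict) -> str:
        return "\n".join(f"{key}: {value}" for key, value in data.items())

class ReportWriter:
    """Knows only how to persist text."""
    def write(self, text: str, path: str) -> None:
        with open(path, "w") as handle:
            handle.write(text)

# usage: formatting rules and storage can now evolve independently
report = ReportFormatter().format({"status": "ok", "errors": 0})
ReportWriter().write(report, "report.txt")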
http://christianity.stackexchange.com/questions/tagged/blessings+priesthood
code
In Catholicism, what distinguishes a priestly blessing from a lay blessing? Lay Catholics are encouraged to bless their children [and homes?] using particular formulas or patterns. As I understand it, these ought to differ in "manner" and formula from blessings performed by a ... Feb 25 '13 at 19:24
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00514-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,291
52
https://www.elecrow.com/wiki/index.php?title=Crowbits-Switch
code
The switch is a digital input module, it comes with its own mechanical locking function. It is useful in controlling power connections most of the time. - Own mechanical locking function - Digital input module - Easy to use - Operating Voltage: 3.3V DC - Supply mode: Crowbits Power Module - Dimensions: 31.5(L)*24.5(W)*13(H) mm 1. You also need to prepare a power module, such as Crowbits-Power Supply, and an output module, such as Crowbits-LED. 2. The connection mode is shown in the figure, but the signal feet of the input module and the output module must be connected. 3. Then, turn on the power, press the button of the self-locking switch, the LED behind it will light up.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00235.warc.gz
CC-MAIN-2020-45
681
10
https://move2modern.uk/index.php/about/
code
Hi I’m Andy Jones and a Microsoft MVP for Enterprise Mobility (2022-2023). I have over 25 years experience with IT working for BT Enterprise. For the last few years I have been a Microsoft Technical Architect working within the M365 Modern Desktop services Cloud practice. Prior to that I worked across various technical teams and technologies developing, managing and consulting. From the early days I have experience with software development and large scale infrastructure projects and played a key role in developing my company’s intranet services. Along my route I have developed iOS Apps, written automated trading software and managed an online store. I currently live in Hertfordshire but love to travel when ever possible. Football used to be a big passion of mine but nowadays to keep fit I can more regularly be found in a cold field in the early mornings doing circuit training or running. I started a new youtube Channel called GetModern (Now renamed to CloudManagement.Community) during COVID lockdown with an ex-work colleague Dean Ellerby which complements my blog posts and includes cloud adoption sessions, product updates, community events and guest speakers. Ultimately we are creating a collaborative community of Cloud professionals. I am active on Twitter and other social sites so please get in contact at @Andy_69Jones or email me on [email protected]
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00436.warc.gz
CC-MAIN-2024-10
1,385
4
http://www.sevenforums.com/performance-maintenance/241533-programs-randomly-lock-up.html
code
Hello all. Longtime visitor - this forum has helped me so much - first time posting. I'm not the most technologically minded person, so please bear with me. My specs (that I know): Toshiba Satellite L745 / Windows 7 Home Premium 64-bit / 4GB RAM. If more are needed, please let me know and point me in the direction to find them. I rarely run more than two or three programs at a time. At the moment, I have only Chrome and AOL Instant Messenger. If I send/receive an IM, the window freezes for a few seconds. If I leave it alone it gets to working normally, but in the meantime the window is whited out. Same thing with Chrome. If I open a new tab - and I rarely have more than two tabs open at the same time - it freezes temporarily. Any program that I run does this. If I'm working on a document and save it after typing a paragraph the program (OpenOffice) locks up for a moment or two. If I have iTunes running and want to switch songs, it freezes for a moment before switching. If I'm watching a video in Media Player, it takes a long time for it to load the video. The audio will work fine but the video will slow down then speed to catch up to the audio, or vice verse. When I press the Win logo to bring up the start menu, that also freezes. I have to click it again to bring it up after about 30 seconds, and if I click a submenu (recent items, all programs) it freezes again. This usually only happens the first time I click it after starting up, and it runs normally afterwards. All the programs do (usually) go on to run normally, but there are times I have to go into task manager to end the program (via the Processes tab). Because my boot and shut down time is slow (in excess of five minutes at times) I have the laptop set to Hibernate. I hardly ever shut down/reboot unless recommended by uninstalling a program or a recent Windows Update. What I've done so far: Microsoft Security Essentials - Up to date, run once a week. About a month ago a trojan "win32/Jpgiframe" was found, and removed. Other than that, no problems detected. Malwarebytes - up to date and run once a week. Aside from tracking cookies, no problems have been found. That's pretty much it. It's a small nuisance (no BSOD or failure to boot at all, knock wood), but it drives me crazy. If anyone can help me, I bake a mean chocolate chip cookie. Thanks in advance,
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164695251/warc/CC-MAIN-20131204134455-00044-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
2,352
13
https://isqed.org/English/Archives/2024/Technical_Sessions/126.html
code
In this article, we present a tool to analyze cache timing vulnerabilities in trusted execution environments. First, we present a platform based on the well-known gem5 simulator capable of booting GlobalPlatform Compliant TEEs for ARMV8 architecture. Next we present the associated GDB instrumentation which allows us to dynamically reconfigure the gem5 simulator and access detailed micro-architectural state after each simulation step. Unmodified Linux/TEE binaries can be run on this platform, from which detailed execution and cache access traces are gathered and analyzed on-the-fly. We demonstrate the usage of this tool, first with an in-vitro experiment to explain the concepts of Key-Cache lines, Key-Execution Points, a method to rank these lines in an increasing order of vulnerability, and code coverage. We show that real vulnerabilities can be detected with our tool, in an otherwise constant-time RSA implementation inside an open Source TEE called OP-TEE.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00469.warc.gz
CC-MAIN-2024-18
971
2
https://forum.openstreetmap.fr/t/rfc-part-3-foot-scale-aims-to-describe-global-paths-in-a-more-helpful-and-informative-scale-than-sac/21108
code
The current sac_scale has… issues. The value names are too specific (people tag easy trails in the alpine as alpine_hiking, mountain_hiking can easily occur outside of mountains) and it’s geared towards mountaineers vs normal hikers and recreational walkers (half the values involve some form of climbing). Outside of some European countries it’s not well known at all, so there’s not any real value from adopting it globally (it can and still should be used in those places alongside this scale) and it doesn’t attempt to cover non Alps concerns. Take 1 is here: RFC: hiking_technique key (or a better name!) to describe movement on paths by hikers and then grew too long for people to catch up on. Take 2 went in a different direction (more of a walking_scale) but had some great insight: RfC: New Key foot_scale=* ("now for something a bit recreational") - #108 by Hungerburg This take 3 merges the two while keeping more to the spirit of the first and is fairly well finalized. I’ll toss it up on a wiki soon (hopefully) with example photos. 6 posts - 4 participants Ce sujet de discussion accompagne la publication sur https://community.openstreetmap.org/t/rfc-part-3-foot-scale-aims-to-describe-global-paths-in-a-more-helpful-and-informative-scale-than-sac/109193
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476442.30/warc/CC-MAIN-20240304101406-20240304131406-00847.warc.gz
CC-MAIN-2024-10
1,281
6
https://www.programmableweb.com/sdk/insight-sdk
code
InSight Software Development Kit (SDK) is an individual face analysis software toolkit which uniquely combines emotion recognition, demographics and eye tracking technologies in one solution. It can track tiny movements of facial muscle in individuals’ face and translate them into universal facial expressions like happiness, surprise, sadness, anger and others. Additionally, the SDK can be calibrated to understand where the user is looking at in the screen, and is also able to measure demographics like age and gender. All this by using a simple camera. InSight is a cross-platform software toolkit which can work on the most popular desktop and mobile operating systems (Windows, Mac, Linux, iOS, Android). Most of us would know exactly what the phrases 'slug bug', 'punch bug' or 'punch beetle' mean. They're normally associated with sudden shrieks and a punch in the arm. I'm talking about that old car game that created hours of fun for any kid on a boring car journey. The object of the game was to be the first to spot a Volkswagen Beetle, call out 'slug bug' and punch the other player in the arm. Using Google Street View Imagery via the Google Maps API, Volkswagen has brought this much-loved, old tradition into the modern digital world. Now anyone, anywhere can play the game without ever having to be in a car. Open source mapping firm Mapbox has announced new Vision SDK designed to help developers build AR navigation experiences, such as iOS and Android-based heads-up displays, by "bridging the phone, the camera, and the automobile." The Mapbox Vision SDK takes advantage of ARM's Project Trillium AI tech.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511122.49/warc/CC-MAIN-20181017090419-20181017111919-00222.warc.gz
CC-MAIN-2018-43
1,630
4
https://www.igeek.com/Austria
code
Austria: Germany Light. 1/3rd less Calories than regular Germany. There's a lot more to add about this beautiful country and culture, both good and bad. But for now, it's just a placeholder for links and things of interest. European think they have freedom, and in some ways they do (and have more than some Americans). But in many other ways they do not. One example is free speech, they think they have free speech, then they arrest people for simple things like having bottles of wine with Hitler's image on it. While I'm the world's biggest anti-Fan of Hitler. I do understand the historical interest or novelty, or just a guy that want to collect offensive things for pure shock value. So while I don't know all the details, I do know that if you're not free to collect simple things like bottles of wine that you want, then you don't really have freedom.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00239.warc.gz
CC-MAIN-2024-18
860
4
https://community.retool.com/t/image-as-container-bckground/5335
code
Would be nice to be able to natively add an image as a container/ header/module/modal background. Using some css overrides I can accomplish this but seems like a helpful native feature. background-image: url("https://----.s3.amazonaws.com/-----.jpg") !important; background-size: contain !important; background-position: center !important; background-repeat: no-repeat !important;
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649439.65/warc/CC-MAIN-20230604025306-20230604055306-00572.warc.gz
CC-MAIN-2023-23
380
6
https://community.oracle.com/message/10191806?tstart=-1
code
Is it possible to install "Oracle VM VirtualBox" on devops machines (the machines that are allocated to us from devops.oraclecorp.com), given that devops machines themselves are virtual machines? When I install VirtualBox on a physical machine one of the prerequisite I perform is to enable CPU Virtualization support in the BIOS. Are there any prerequistes to install VirtualBox in devops virtual machines? A related question. Which virtualization software does devops use? Is it Oracle VM Server or Oracle Virtual Desktop Infrastructure?
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00489.warc.gz
CC-MAIN-2017-43
539
3
http://blog.vrplumber.com/b/2005/01/27/why-isnt-the-mozsvg/
code
I've been testing my changes since switching back to Linux by using Firefox. That's fine, but there's no AMD64-compatible SVG plug-in that I'm aware of for Firefox. There's also no flag for building MozSVG into Firefox that I can find, so, it's off to Mozilla (the suite). Problem there is that it has started taking 30-40 seconds to render an SVG-containing page in Mozilla, with crashes and weird rendering artefacts (it was working fine the last time I was using Mozilla for development, of course). Might be a KDE 3.3.2 problem (versus 3.3.0), I suppose, but that seems strange. I suppose I should submit a bug report, but the last time I did one for the released MozSVG I was told rather curtly that it was hopelessly out of date (IIRC that was 1 or 2 weeks after the official release). I guess I could build Mozilla+MozSVG from CVS, but at that point I probably have to cut my losses and go back to Firefox on Win2K. On a happier note, during my Linux absence, the CMU Sphinx speech-recognition library has moved from being a royal pain to install (due to the JDK required) to a simple no-questions-asked emerge. The SphinxTrain build failed, though. Strangely, one of the dependencies for Blender (the Yafray renderer) also doesn't build out-of-box. Radiance, btw, built without complaint once I added ~amd64 to its list of platforms.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100531.77/warc/CC-MAIN-20231204151108-20231204181108-00826.warc.gz
CC-MAIN-2023-50
1,364
5
https://lists.qt-project.org/pipermail/interest/2015-January/014807.html
code
[Interest] porting to Qt 5.4 - MOC very slow on Windows thiago.macieira at intel.com Thu Jan 15 18:48:16 CET 2015 On Thursday 15 January 2015 14:16:37 Keith Gardner wrote: > Do you have the /MP flag set for your project? Even though MOC takes > longer, this will at least allow you to run one instance of MOC for every > core on your system in addition to reducing build times in general. The /MP flag is a compiler flag, telling the compiler to be multithreaded. cl.exe doesn't compile in each of the instances of the compiler; instead, it finds running instance and tells it to compile the source that it was given. That's how cl.exe implements /MP. After a while, that compiling instance exits. Moreover, that doesn't apply to If you want to make full use of your CPU cores, use jom.exe, not nmake.exe. Thiago Macieira - thiago.macieira (AT) intel.com Software Architect - Intel Open Source Technology Center More information about the Interest
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00557.warc.gz
CC-MAIN-2021-25
947
16
https://trashnet.xyz/about.html
code
My name is Stephen Panicho and I’m a DevOps Engineer. Because hey, what systems person isn’t these days?! I created this site to share not only my discoveries and solutions and blah blah blah but, more importantly, to have a comfortable little Web 1.0 world to publish my thoughts without worrying about the social media engagement stuff that’s poisoned our brains over the last decade and a half. But hey, if you do want to interact, you can contact me at [email protected]. You can also check out my Github.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00043.warc.gz
CC-MAIN-2021-31
515
4
https://gamedev.stackexchange.com/questions/11491/flash-as3-tower-defense-problem
code
Hello, you should consider using the dot product to compute the angle between two vectors. - A is the vector from the tower to the max-distance point aligned with the first enemy. - B is the vector from the tower to the enemy to test (one of the enemies list). θ = acos( (A . B) / (|A| |B|) ) Thus if θ is equal or close to 0° then they are parallel and in the same direction. You could also consider using the cross product, but it fits better for 3D. One of the uses of the cross product is to determine if two vectors are parallel by checking whether the resulting vector is close to the zero vector, so you can determine which enemies in your enemies list are aligned on the same vector as your first targeted enemy. It normally applies to 3D vectors only, but as far as I remember you can use it with 2D vectors by using 0 as the Z value. (Take care: they can be parallel but in opposite directions, i.e. on the opposite side of the tower. You can use the dot product to determine if they point in the same direction.) Still considering the same A and B as in the dot product part: A x B = [Ax, Ay, Az] x [Bx, By, Bz] = [Ay*Bz - Az*By, Az*Bx - Ax*Bz, Ax*By - Ay*Bx]. If this result vector is equal to (or "close to", depending on your needs) [0, 0, 0] then they should be parallel; then compute the dot product to verify they point in the same direction. Finally, you could also consider computing the distance between the segment (from the tower position to the max radius along the first enemy's direction) and the point (each enemy in your current enemies list): http://paulbourke.net/geometry/pointline/, or the distance from a point to a plane if you use 3D and flying enemies on top of grounded ones. Using the above techniques (distance from a point to a plane/segment), you should also consider first reducing the number of enemies in your current enemies table using a simple vector dot product (the vector from the tower to the max distance in the first enemy's direction, dot-product the vector from the tower to the enemy to test). If the dot product is greater than 0 they point in the "same direction", if it is equal to 0 the vectors are perpendicular, and if the dot product is lower than 0 the vectors point in "opposite" directions. This way you can reduce the number of enemies to consider by keeping only those in the front direction... I bet this is useful only with a huge number of enemies, so it's up to you to decide whether it's worth it or not. (A short code sketch of the dot-product test follows this entry.) Edited: to reorganize the ideas... I beg your pardon if this is not clear enough, as I'm not really fluent in English; if it's unclear I will try to reformulate.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.87/warc/CC-MAIN-20231130031610-20231130061610-00392.warc.gz
CC-MAIN-2023-50
2,631
16
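A small Python sketch of the dot-product alignment test described in the answer above (not from the original post; the names, the 2D setup and the 5-degree tolerance are illustrative choices):

import math

def angle_between(ax, ay, bx, by):
    """Angle in radians between 2D vectors (ax, ay) and (bx, by)."""
    dot = ax * bx + ay * by
    mag = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.acos(max(-1.0, min(1.0, dot / mag)))  # clamp for rounding noise

def roughly_aligned(tower, first_enemy, enemy, tolerance=math.radians(5)):
    """True if enemy lies close to the tower -> first_enemy direction."""
    a = (first_enemy[0] - tower[0], first_enemy[1] - tower[1])
    b = (enemy[0] - tower[0], enemy[1] - tower[1])
    return angle_between(a[0], a[1], b[0], b[1]) <= tolerance

print(roughly_aligned((0, 0), (10, 0), (5, 0.3)))  # True: nearly collinear
print(roughly_aligned((0, 0), (10, 0), (5, 4.0)))  # False: well off the line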
https://solutions.filemaker.com/made-for-filemaker/de/search.jsp?category=3&offset=10
code
Import.log file reader. Shows you what objects were copied and what notices or errors were produced in the copy/import process in an intuitive layout. Addendum is an application that lists the functions, script steps, script triggers, error codes and glossary of FileMaker. Sync FileMaker to iOS, OS X, Google Calendar, and Exchange/Office 365 calendars. Turn a FileMaker Server into a shared calendar server in a few clicks **Try FMPerception FREE for 14 days!** Get the best performance out of FileMaker® Pro by optimizing your solutions and achieve 100+% speed gains in almost no time. CrossCheck™ imports data from Database Design Reports (DDR) into a FileMaker database. The data will then be structured, filed, upgraded and linked. It's an AppleScript application which help FileMaker Admin uploads & opens the fmp12 files in the Server machine. Documentor is an inexpensive developer tool that imports the DDR into a FIleMaker database. fmFlare is a new way to fire up your FileMaker development. Each “Flare” is a pre-built integration code module that simplifies common difficult and advanced programming tasks so FileMaker can talk to powerful external web services. fmFlare is all native FileMaker scripting and does not require any plug-ins, so it will run effectively on FileMaker Pro Advanced, Go, and WebDirect. fmFlare saves time and money in implementing advanced features.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313996.39/warc/CC-MAIN-20190818185421-20190818211421-00050.warc.gz
CC-MAIN-2019-35
1,397
9
http://stackoverflow.com/questions/659722/where-would-i-keep-my-txt-files-for-inclusion-in-a-flex-app
code
I'm working on a Flex app, and I have Flash & AS3 experience up to now. I have a text file I need to request using URLLoader, so I placed it in the same directory as the SWF: deploy > maps > map1.txt but when I run the SWF I get the following error *** Security Sandbox Violation *** SecurityError: Error #2148: SWF file file:///Users/him/Documents/Clients/Geekery/Bounce/deploy/Bounce.swf cannot access local resource /maps/map1.txt. Only local-with-filesystem and trusted local SWF files may access local resources. at flash.net::URLStream/load() at flash.net::URLLoader/load() at com.geekery.Bounce::BounceMap() at Bounce/loadMap() at Bounce() Which seems odd to me. Is there a special place I should be keeping files like this? Or is there some way I can allow files to be loaded from the same directory as the SWF?
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636213.71/warc/CC-MAIN-20150417045716-00177-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
814
5
http://www.linuxquestions.org/questions/linux-newbie-8/minicom-on-ok-ancient-mandrake-permissions-117045/
code
Hi, first of all, I admit I'm using Mandrake 6.0. Yes, I'm thinking of changing/updating, but maybe not on this machine. I've tried everything I can find to get the minicom program to run for a user, not as root. (Running as root isn't a good idea, no?) I've checked the permissions on the modem, /dev/ttyS1. I've set the modem in the minicom setup to be /dev/ttyS1 and not /dev/modem (which has different permissions). /dev/ttyS1 has root for owner, tty as group; my user (say it's dauser) has tty in his/her groups. Still, when dauser tries to run minicom, it gets this permission denied message. There's supposed to be a file in /etc/minicom.users (there is a dosemu.users file there). I put one there, following the sample in the minicom man pages, with the line dauser dfl or dauser dfl ttyS1 It still doesn't work. I've looked in other places where this minicom.users might be, and it's not there (i.e. running minicom -s (for setup) doesn't generate it). This is embarrassing and also I can't think of anything else to do. I may be back asking this same question later for another version (less antique) or another distro. I'd still like to solve this one. Any archaeologists around? Distribution: Fedora Core 3, Red Hat 9, Mandrake 8.1 I assume minicom works for you just fine as root. You are correct in setting the serial port, but have you checked the permissions on the minicom file itself? I used minicom yesterday and here are my permissions -rwxr-xr-x 1 root uucp /usr/bin/minicom c-rwxrwxrwx 1 root uucp /dev/ttyS0 As a regular user I can use the command minicom -s My minicom ver 2.00.0 (came with RedHat 9) Hope this helps!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00480-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,236
42
https://answers.popcornfx.com/2387/unity-discovery-demo-scene-not-working-unity-2017-3-0f2
code
I've gotten a couple different errors while trying to get the discovery scene to work. I imported the free Discovery Pack as well as the Editor SDK, but still doesn't seem to run in Unity! A coworker of mine has gotten it to work on her computer, but we can't identify what's different about our projects. I even uninstalled everything and tried again. https://assetstore.unity.com/packages/vfx/particles/popcornfx-discovery-pack-26440 There's a verification error that shows up when I'm trying to play a particle effect in the General Scene: [PKFX] Signature verification failed for fx Explosion_vrandFunction.pkfx There might also be a problem with the pkmm file as it seems PointClickSpawn.Update() can't find it. I've also gotten an error referring to Unity VR: TypeLoadException: Could not load type 'UnityEngine.VR.VRSettings' from assembly 'PKFx-Runtime'. Any help would be appreciated!
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997731.69/warc/CC-MAIN-20190616042701-20190616064701-00264.warc.gz
CC-MAIN-2019-26
893
7
https://www.careerjet.sk/jobad/ske95cb741bbb0a0a6f56f41792ba28144
code
Accenture Advanced Technology center is looking for technology enthusiasts ready for a challenging IT career and focus on application development, systems administration work, and software maintenance providing practical programming and technology implementation for business. You could have the opportunity to help clients develop applications with a cloud-first, digital- first mindset, generate the next wave of innovative technology in our state-of-the-art Accenture Labs, and provide end-to-end service and project delivery across all our Accenture Delivery Center engagements using rapid development principles such as Agile and DevOps to make software faster, flexible and more liquid. WHAT YOU WILL DO - Participated in full life cycle testing activities throughout all phases of the SDLC - Developed Data Driven Tests and automated functional testing - Creation of various Prove of Concepts with TOSCA • Responsible for Writing Test cases for each new sprint stories - Addressed gaps in the existing defect management process and implemented new procedures - Coordinating with team and deploying the code to UAT Handling dynamic values in test cases in TOSCA; WHAT WE EXPECT - 5+ years Experience with Automation Testing - Real experience with TOSCA, SoapUI, JMeter, SQL - Advanced English skills - Strong communication, analytical and problem-solving skills - Jira, ALM, Java, Python - German skill Accenture offers a competitive compensation package. As required by the Slovak law we state, that the legal monthly minimum gross base pay starting from 1 500€ to 2 700€ depending on your professional and personal qualifications in the required areas. - Guaranteed Paid overtime or overtime vacation - German language bonus up to 400€ monthly/gross depending on language proficiency and level of seniority - Cafeteria - Budget for benefits based on your choice - Transparent bonus structure - Refer-a-Friend - get a bonus in the employee referral program - Loyalty presents - Flexible working arrangements (Flex-work, Home Office) - Option to participate on projects abroad - Wide range of leading-edge trainings (including various language - Teambuilding activities - Regular performance management and evaluation process - Ongoing career guidance and mentoring - Multisport card which grants an access to hundreds of different - 3 Extra Days-off - Sick leave compensation - Family oriented benefits (Wedding and New baby bonus, Inhouse day care) Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Accenture is committed to providing veteran employment opportunities to our service men and women.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00594.warc.gz
CC-MAIN-2020-45
2,694
34
https://coding.abel.nu/tag/maintenance/
code
A team in trouble would probably say that they would very much welcome expert help – but would they really? Are they ready to face the brutal facts of the state of the project? An important part of the psychology behind forming a team is the distinction between “we in the team” on the inside and the rest of the world on the outside. To some extent this is good: a team with a strong spirit will be able to handle any obstacle in the outside world. But in some cases it can backfire, when the team stops looking at themselves and only looks at the outside world to explain problems. In fact, when a team is in trouble, they will in most cases first find external reasons for the trouble. The customer demanded a huge change, with a very tight deadline. The architecture in this legacy system is so hard to work with that it takes forever to make changes. The first one is external – the customer asked for a change, but what about the second? It might be that the system was recently handed over to the team, in which case it is external. But what if the team has been in charge of maintenance of the system for a year? The fact that the legacy system they took over had an architecture that’s hard to work with is external. The fact that they haven’t done anything about it in a year is at least partly internal to the team. Looking deeper at the first example, that might be internal as well. The change may be huge by the team’s standards, but if they had a flexible architecture and proper automated tests in place it might not have been such a big issue. If the team has worked in isolation for too long, they are less likely to see that their architecture is inflexible and that their testing is substandard. When working with legacy code I usually start by classifying all modules of the system based on the urgency for rebuild. The classification helps make sure that no work is wasted improving details of code that will be discarded later. I use four levels to classify the code: The classification of different parts of the code is communicated to everyone on the team so that everyone knows the rules for the different parts of the system. I can’t count how often I’ve been in a meeting with a financial manager, trying to explain the technical considerations of a project. To be honest, there have been times when it has been really, really hard to keep it technically simple enough, while still being accurate. Those are the lucky times. The bad times are when I’ve completely failed and left the room without any funding at all. I read a blog post (in Swedish) written by a colleague of mine that opened my eyes to the term technical debt. It was first formulated by Ward Cunningham. Martin Fowler has written a great article that explains the concept. Reading a bit more, following links and searching, I found a number of ideas on how to describe software development in financial terms.
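The excerpt does not name the four levels, so the sketch below is purely illustrative: it shows one way a team might record a shared, module-by-module rebuild-urgency classification, using hypothetical level names and module paths rather than the author's actual scheme.

```python
from enum import Enum

# Hypothetical level names -- the post above does not spell out its four levels.
class RebuildUrgency(Enum):
    REPLACE_SOON = 1   # slated for rewrite; only critical fixes allowed
    ISOLATE = 2        # wrap behind an interface; avoid adding features inside
    IMPROVE = 3        # refactor and add tests as part of normal work
    KEEP = 4           # healthy code; normal development rules apply

# A single shared table makes "the rules for the different parts" explicit.
MODULE_CLASSIFICATION = {
    "billing/legacy_invoice.py": RebuildUrgency.REPLACE_SOON,
    "reports/pdf_export.py":     RebuildUrgency.ISOLATE,
    "core/orders.py":            RebuildUrgency.IMPROVE,
    "api/customers.py":          RebuildUrgency.KEEP,
}

def rules_for(module: str) -> RebuildUrgency:
    """Look up the agreed urgency level before touching a module."""
    return MODULE_CLASSIFICATION.get(module, RebuildUrgency.IMPROVE)

if __name__ == "__main__":
    for module, level in sorted(MODULE_CLASSIFICATION.items()):
        print(f"{module}: {level.name}")
```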
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866772.91/warc/CC-MAIN-20180524190036-20180524210036-00615.warc.gz
CC-MAIN-2018-22
2,925
11
https://www.opendap.org/software/hyrax/1.6.0
code
The 1.6.0 beta release: - This is a beta release; binary builds will be made available soon. - NCML: Hyrax now supports NcML. With the NcML handler you can add new attributes and variables to any of the data sets served with Hyrax. The initial version of the NcML handler also supports aggregations of multiple files into a single data set. - Automatically generated THREDDS catalogs now have a 'file' service - Improvements to the netCDF file response (Hyrax 1.5 introduced netCDF as a return type for Hyrax). - HDF5 now has support for HDF EOS and a special mode for CF 1.0 compliant files. - HDF4: Bug fixes for HDF EOS. - Bug fixes and feature additions (Beta Release) Individual binaries will be added when they are available. What to get: You will need the OLFS (which is a Java binary and runs on any computer with Java 1.5) and one set of the BES, dap-server and one or more data handlers. We're including libdap here as well to cut down on the amount of hunting around you need to do; libdap is required for all of our software written in C++ (which includes the BES and the handlers). - If you are updating from an older version of Hyrax (older than Hyrax 1.5.0) you will need to update your olfs.xml file! See $CATALINA_HOME/webapps/opendap/initialContent/olfs.xml for the default olfs.xml for Hyrax 1.6.0. For more detailed information see the Hyrax Configuration Documentation - Version 1.6.0 beta: - Note: All of our software downloads are signed using GPG. Ask us for the public key if you want to verify the signatures. - libdap 3.10.0, gpg signature - bes 3.8.2, gpg signature - General purpose handlers 4.0.0, gpg signature - NetCDF Handler 3.9.0, gpg signature - FreeForm Handler 3.8.0, gpg signature - HDF4 Handler 3.8.0, gpg signature - HDF5 Handler 1.4.0, gpg signature - Fileout NetCDF Handler 1.0.0, gpg signature - Wcs Gateway Handler 1.1.0, gpg signature - NcML Handler 1.0.1, gpg signature - Version 1.6.0 beta - Contributed: Sharing your binary builds for those operating systems not available here.
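As a rough illustration of what a Hyrax-served (or NcML-aggregated) data set looks like from the client side, the sketch below opens a DAP URL with the netCDF4-python bindings. The server address and variable name are invented for the example, and the local netCDF library must be built with OPeNDAP support for remote URLs to work.

```python
from netCDF4 import Dataset

# Hypothetical Hyrax endpoint -- substitute a real DAP URL from your own server.
URL = "http://example.org/opendap/data/nc/sst.mnmean.nc"

ds = Dataset(URL)                    # opens the remote data set over DAP
print(list(ds.variables))            # variables exposed by the server
sst = ds.variables["sst"][0, :, :]   # slicing sends a constraint to the server
print(sst.shape)
ds.close()
```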
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00179.warc.gz
CC-MAIN-2022-40
2,021
24
https://www.ncrna.org/
code
Bioinformatics Tools and Databases for Functional RNA Analysis ncRNA.org is the portal site for bioinformatics tools and databases spcialized for functional RNAs. The developments in this cite were partially supported by “The Functional RNA Project” funded by New Energy and Industrial Technology Development Organization (NEDO) since 2005. Web Tools for RNA analyses Results of various analysis on RNA secondary structures in one page! Functional RNA Database (archived at NBDC)
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473598.4/warc/CC-MAIN-20240221234056-20240222024056-00020.warc.gz
CC-MAIN-2024-10
483
5
https://ell.stackexchange.com/questions/213260/do-i-need-to-know-something-except-phrasal-verbs-and-collocations-to-use-colloqu
code
What do I really need to know about colloquial English except phrasal verbs and collocations? Vocabulary and phrases would also be necessary. Colloquial English extends beyond phrasal verbs in terms of vocabulary. There's a concept called register, which you may already be familiar with, which essentially rank orders words in terms of their formality. They generally fall into three categories: formal, neutral, and informal. For example, "What's up, mate?" vs "Good evening, Sir" as two types of greeting. Regarding collocations, I wouldn't say that is really directly relevant to the topic of formal vs colloquial English. It's more about standard/common English. If you don't use typical collocation patterns, then that won't make you sound less formal, but rather more foreign. "I'm having a party next week" or, "I'm throwing a party next week" not "I'm making/doing a party next week" (A typical 'mistake' Spanish speakers make)
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475311.93/warc/CC-MAIN-20240301125520-20240301155520-00472.warc.gz
CC-MAIN-2024-10
936
5
http://weeklybeats.com/bryface/music/twisted-registers
code
thought i'd experiment with famitracker again - I think i finally managed to come up with something i'm really happy with; most other NES tunes i've made in the past were only "serviceable" at least from a sound design perspective. prior to this i hardly ever used the DPCM channel, so this tune began mainly as an experiment to learn the wave import workflow. i started with the modulated chord stabz - those were rendered via Charlatan VST running through reaper. i spent most of the time tweaking the parameters to get a feel for which frequencies would most likely survive the DPCM conversion. i was also hoping to make a hardware recording of this, as i fixed my famicom to provide a cleaner mic-less signal out of the TNS-HFC3 unit that i have. what i didn't realize though was that the cart doesn't actually support custom NSF timer speeds. darn! hopefully next week i'll record something that doesn't cause the cart to die. This submission is licensed by author under CC Attribution Noncommercial No Derivative Works (BY-NC-ND)
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00267-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,035
4
https://forums.macrumors.com/threads/qt-playback-issues.626897/
code
SPECS: G4 800MHz AGP - 768MB RAM - OS X 10.4 - QT ver. 7
ISSUE: When playing a QuickTime file (i.e. 720x480 pixels / 30fps) it stalls and the playback looks more like a choppy 10 frames per second. Smaller dimensions (320x240) look fine.
QUESTION: Is there a solution to have large videos play back more smoothly on my G4? Is this a RAM issue? Do I need to get a better video card, and if so, which one? Or is this as good as it gets? Please help if you can. Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00544.warc.gz
CC-MAIN-2019-04
475
1
http://www.geekstogo.com/forum/topic/85774-free-partition-manager/
code
Free Partition Manager
Posted 19 December 2005 - 05:04 PM
Posted 19 December 2005 - 06:27 PM
Posted 19 December 2005 - 07:44 PM
You should back up all the data first, just in case, then defrag and optimize the partition to get all the data together. I would guess that increasing the size can only be done if there is space available after the partition, which means you would need to remove the next partition. I doubt that it could resize from the start instead of from the end.
Posted 26 December 2005 - 10:24 AM
Posted 26 December 2005 - 03:46 PM
...saying that I could download knoppix just for the qtparted thingummy, partition, then clear it out? (presuming I'm going to be die-hard over my new Ubuntu CDs...) Not quite. Knoppix runs off of a CD, so there is nothing to clear out. It is a good tool to have, even if you run Windoze, but especially if you run Linux. You could run Knoppix all the time, but I don't do that.
Posted 28 December 2005 - 12:13 PM
Posted 28 December 2005 - 03:07 PM
Knoppix is an OS, so you need to be running that. Make sure you first do a backup, then defrag the disk. After that you can boot Knoppix from the CD and run qtparted. You will need to run as root, so after it boots, open a terminal window, type "su" to become root, then type "qtparted" and hit return. The GUI will come up and hopefully you can figure it out from there :-).
Posted 31 December 2005 - 09:00 PM
I run Knoppix live, and from that OS I run qtparted. In other words, qtparted is not a program that runs at startup.
Posted 31 December 2005 - 10:45 PM
Yes, you would boot the Knoppix CD and run the program.
Posted 16 January 2006 - 11:44 AM
It just seems weird to me that an OS can repartition a drive when it's sitting on one of the partitions. Would Knoppix be able to repartition if it was installed?
Posted 16 January 2006 - 12:29 PM
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510932.57/warc/CC-MAIN-20181017002502-20181017024002-00304.warc.gz
CC-MAIN-2018-43
1,910
23
http://www.net-sensing.com/TieredView.html
code
Advanced Data Distribution in One Day or Less with MQTT Device Messaging, Embedded Software Architecture, PC Configuration and Analysis Tools and More on NetBurner Devices. Layered Software – Configured For NetBurner and PC Systems NodeX software for NetBurner devices is available in three levels, from a basic NST MQTT client implementation to a complete embedded software architecture with device management, intra/inter-node device routing and message translation. NST's threaded MQTT C++ class implementation optimized for NetBurner developers/integrators. A standard COTS MQTT broker for use on Windows with minor usability modifications. Sample code framework to kickstart your distributed data application. Prototype and field connectivity to/from your "Thing" (model, software, I/O protocol driver, etc.). All the NodeX-LT features plus the NodeX C++ embedded application software framework for intranode/internode device management and data routing capabilities. An Interface Control Document (ICD) Device/Object Data Dictionary framework which completely describes every detail of a device interface (messages, fields, data types, LSB, scale factors, units, enumerations, endianness). NodeX NetConfig, a PC graphical user interface tool to configure the network, nodes, message topics and routing requirements. NodeX NodeLogger, a PC tool for logging, data extraction, plotting, quicklook logging, grid display and data viewing. NodeX NetView, a PC tool to dynamically access node data in plots, grids, displays and quicklook logs in real time - No Coding Necessary. All the NodeX-DM features plus DataMapper device message translation technology. Map and translate from source device messages to a completely different destination message without any code development! Save Bandwidth & Cloud Data Cost! With the DataMapper option you can throttle device(s)' message rates, merge messages from multiple streams, control content and publishing rate based on data triggers, conditionals or desired rate, create new messages and alter it all without any modifications to the embedded code! Note: NST's embedded software framework is suitable for use on non-resource-constrained NetBurner modules. Note: Things Change Rapidly! Specifications, Capabilities, Features and Availability Subject to Change Without Notice and May Be Restricted to Certain Users.
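NST's client is a C++ class that runs on NetBurner modules, so the snippet below is only a generic PC-side sketch of the MQTT publish/subscribe pattern the page describes, written against the open-source paho-mqtt package; the broker address and topic names are placeholders and are not part of the NodeX product.

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x callback API

BROKER = "192.168.1.10"              # placeholder broker address
TOPIC = "nodes/node1/temperature"    # placeholder topic layout

def on_message(client, userdata, msg):
    # Print every device message routed through the broker.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()               # paho-mqtt 2.x also needs a CallbackAPIVersion argument
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.subscribe(TOPIC)

# Publish one reading the way a device node might, then service the network loop.
client.publish(TOPIC, payload="23.5", qos=1)
client.loop_forever()
```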
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00746.warc.gz
CC-MAIN-2023-23
2,364
23
https://www.netiq.com/documentation/appmanager-modules/AppManagerforLync_ReleaseNotes/data/AppManagerforLync_ReleaseNotes.html
code
AppManager for Microsoft Lync, formerly Office Communications Server, presently known as Microsoft Skype for Business, provides performance and availability management for the various servers that provide instant messaging, presence, conferencing, and other unified communications that your corporation uses.. This release improves usability and resolves previous issues. Many of these improvements were made in direct response to suggestions from our customers. We thank you for your time and valuable input. We hope you continue to help us ensure that our products meet all your needs. You can post feedback in the AppManager forum on NetIQ Communities, our online community that also includes product information, blogs, and links to helpful resources. The documentation for this product is available on the NetIQ website in HTML and PDF formats on a page that does not require you to log in. If you have suggestions for documentation improvements, click AppManager Modules page. To download this product, see the AppManager Module Upgrades & Trials website.to the right of each section of any page in the HTML version of the documentation posted at the This release of AppManager for Microsoft Lync resolves the following issues: Multiple Lync Knowledge Scripts have hard-coded event severity: Multiple Knowledge Scripts had hard-coded severity for some of the failure events. In this release, you can change those event severity based on your requirement. (ENG341038) Lync_SystemUsage reports false CPU usage. The Lync_SystemUsage Knowledge Script raised events that showed false CPU utilization for the multiprocessor servers. In this release, the issue is resolved. (ENG340253) Lync_ArchivedVoIPCallActivity and Lync_CallQuality jobs fail with arithmetic overflow error: The Lync_ArchivedVoIPCallActivity and Lync_CallQuality jobs failed with arithmetic overflow error. In this release, the error is resolved. (ENG341039 and ENG341040) Lync_SessionCallFailures sometimes report large number of failed sessions: The Lync_SessionCallFailures raised a large number of events for failed sessions. In this release, the issue is resolved. (ENG340950) Lync_CollectCallData jobs consume too much memory: The Lync_CollectCallData jobs consumed too much memory. In this release, the issue is resolved. (ENG342587) Pruning of Lync Supplemental Database fails without reattempting: The pruning of Lync Supplemental Database failed because of SQL timeout. In this release, the issue is resolved by reattempting pruning in the subsequent iterations. (ENG343625) To get the updates provided in this release, you must install the module to the AppManager repository (QDB) and on the agent computer, and then propagate changes to any running jobs for the Knowledge Scripts that were updated in this release. AppManager 7.x does not automatically update renamed or copied Knowledge Scripts. For more information, see the “Upgrading Knowledge Script Jobs” section in the management guide. This release of AppManager for Microsoft Lync replaces all Previous Releases. For the most recently updated list of supported application versions, see the AppManager Supported Products page. Unless noted otherwise, this module supports all updates, hotfixes, and service packs for the releases listed below. 
AppManager for Microsoft Lync has the following system requirements: NetIQ AppManager installed on the AppManager repository (QDB) computers, on the Lync computers you want to monitor (agents), and on all console computers 8.0.3, 8.2, 9.1, or later One of the following AppManager agents are required: Microsoft Windows operating system on the agent computers One of the following: Microsoft Lync on the agent computers 2010 or 2013 Microsoft Skype for Business on the agent computers Microsoft SQL Server for Lync Supplemental Database One of the following: Microsoft .NET Framework installed on the Lync trusted application server AppManager for Microsoft Windows module installed on the AppManager repository (QDB) computer, on the Lync computers you want to monitor (agents), and on all console computers 18.104.22.168 or later NOTE:This release of AppManager for Microsoft Lync does not support an upgrade from the AppManager for Microsoft Office Communications Server module. For more information on hardware requirements and supported operating systems and browsers, see the AppManager for Microsoft Lync Management Guide, included in the download package. Microsoft Lync includes a managed object, qNQLync.dll, and Knowledge Scripts to discover and monitor Microsoft Lync resources. The download package includes this release notes, a management guide, Help for Knowledge Scripts, and several files that are used during installation: AM70-Lync-22.214.171.124.msi, the module installer. AM70-Lync-126.96.36.199.ini, a configuration file used with the AppManager setup program. AM70-Lync-188.8.131.52.xml, a configuration file used for deploying the module with Control Center. This is the file you check into the Control Center Web Depot. AM70-Lync-184.108.40.206-RepositoryFiles.exe, a compressed file that contains the QDB and console files. You do not need to run this file during installation. ckLync.exe, the pre-installation check used with the AppManager setup program. When you download the module, these files are copied by default to the local folder on the download computer. Consider copying these files to the \windows_installation\setup\Setup Files folder on the same distribution computer on which you saved your main AppManager software and documentation. By doing so, you maintain all AppManager software in one location that is easily accessible when you want to add more repositories, management servers, or agents. Run the module installer to install the module components in the following locations: On the Lync computers you want to monitor (agents) to install the agent components On all console computers to install the Help and console extensions Run the module installer only once on each of these computers. You must also install the Knowledge Scripts. You can install these components into local or remote QDBs. When installing to the primary QDB, select the option to install Knowledge Scripts, and then specify the SQL Server name of the server hosting the QDB, as well as the case-sensitive QDB name. Important This release provides SQL stored procedures. To ensure module functionality, run the module installer for each QDB attached to Control Center. Install Knowledge Scripts only once per QDB. The module installer now installs Knowledge Scripts for each module directly into the QDB instead of to the \AppManager\qdb\kp folder as in previous releases. This module discovers and monitors only Lync components. 
If you plan to retain Office Communications Server (OCS) components on any of your servers, run AppManager for Microsoft OCS on those components. In addition, the AppManager for Microsoft OCS module does not support Microsoft Lync, and you should not use it to discover and monitor Lync components. For more information about installing this module, see the AppManager for Microsoft Lync Management Guide, included in the download package. NetIQ Corporation strives to ensure our products provide quality solutions for your enterprise software needs. The following issues are currently being researched. If you need further assistance with any issue, please contact Technical Support. Remote deployment fails on Microsoft Windows Server 2012 computers with versions of AppManager prior to 8.2: When you remotely deploy this module to a computer running Windows Server 2012 with any version of AppManager prior to 8.2, AppManager displays the following error: Unknown operating system detected for machine IPAddress. To work around this issue, install this module manually. (ENG319069) Lync_SyntheticTransaction Knowledge Script reports higher latency on first iteration. When you run Lync_SyntheticTransaction Knowledge Script for the first time, the latency increased abruptly. Ignore the latency in the first iteration. (ENG337132) This release of Microsoft Lync includes enhancements added in previous releases. This release provided the following enhancement: Support for monitoring Microsoft Skype for Business. This module now supports monitoring Lync Servers and Microsoft Skype for Business at the same time. This release resolved the following issue: Full discovery does not discover all the objects in the Tree View. This release resolves an issue where the Discovery_Lync Knowledge Script did not discover all the objects when you ran the discovery on a server where multiple sites were configured on a Front End Pool Server. With this release, the discovery Knowledge Script discovers all the objects even if you have configured multiple sites on a Front End Pool Server. (ENG338348) This release included the following set of new Knowledge Scripts: Lync_SyntheticTransaction: Monitors the health of the Lync deployment by executing the Lync synthetic transaction test against the Lync Front End pools. This Knowledge Script reports the test result and latency of the Lync synthetic transaction test, which helps in understanding the end-user experience. (ENG223456) Lync_ExtendedSyntheticTransaction: Monitors the health of the Lync deployment by executing the Lync extended synthetic transaction test against the Lync Front End pools. This Knowledge Script reports the test result and latency of the Lync extended synthetic transaction test, which helps in understanding the end-user experience. Lync_SetupSupplemetalDB: Creates a Lync supplemental database, including the tables and stored procedures needed to store call quality detail metrics of audio, video, and application sharing calls. Lync_CollectCallData: Polls Lync Quality of Experience (QoE) metrics databases for call quality metrics and saves the data to the Lync supplemental database. Lync_CallQuality: Monitors Lync call quality information stored in the Lync supplemental database for call quality statistics of audio, video, and application sharing calls. The statistics include round trip, jitter, packet loss, and Mean Opinion Score (MOS). 
This release also included the following enhancement: Updates to the Discovery_Lync Knowledge Script: With this release, the Discovery_Lync Knowledge Script includes an option to create a supplemental database, which is used by Lync_CollectCallData to collect call quality data. This release provided the following enhancement: Support for monitoring Microsoft Lync 2013. This module now supports monitoring Microsoft Lync 2013 and Lync 2010 servers at the same time. This release provided the following enhancement: Support for Microsoft Windows Server 2012 Support for Microsoft Lync 2010 Monitors the health of all services running on Lync servers Monitors the availability of Lync servers Monitors the total CPU usage of a server using Lync Monitors the VoIP call metrics contained in the Monitoring (CDR) database Monitors the current call activity and call failure metrics of Edge Servers and Mediation Servers Tracks server uptime since last reboot Tracks the number and duration of all sessions occurring on Lync servers Tracks the load placed on servers by ongoing conferences and sessions Tracks any failed conferences or sessions Our goal is to provide documentation that meets your needs. If you have suggestions for improvements, please email [email protected]. We value your input and look forward to hearing from you. For detailed contact information, see the Support Contact Information website. For general corporate and product information, see the NetIQ Corporate website. For interactive conversations with your peers and NetIQ experts, become an active member of our community. The NetIQ online community provides product information, useful links to helpful resources, blogs, and social media channels. For information about NetIQ legal notices, trademarks, disclaimers, warranties, export and other use restrictions, U.S. Government restricted rights, patent policy, and FIPS compliance, see https://www.netiq.com/company/legal/. Copyright (C) 2017 NetIQ Corporation. All rights reserved.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00761.warc.gz
CC-MAIN-2022-33
12,180
81
https://www.polytechforum.com/mech/oil-and-gas-reserves-map-of-the-north-sea-2419-.htm
code
I'm looking for a large map (to wall mount) detailing the oil and gas fields and relevant infrastructure. (UK map showing the terminals / interconnector / fields) All I've found are ones that cost about £160. I'm not bothered about seeing all the allocated sectors - just a basic overview map will do. Does anyone know of a source?
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00626.warc.gz
CC-MAIN-2023-23
333
2
https://www.frosoif.org/best-option-for-hosting-your-own-website/
code
Best Option For Hosting Your Own Website Finding a top quality economical web hosting carrier isn’t very easy. Every site will have various demands from a host. And also, you need to contrast all the features of a hosting company, all while trying to find the very best offer feasible. This can be a great deal to sort through, especially if this is your very first time purchasing holding, or constructing a website. The majority of hosts will provide very inexpensive initial rates, only to raise those rates 2 or 3 times higher once your preliminary contact is up. Some hosts will offer free bonus offers when you sign up, such as a totally free domain name, or a complimentary SSL certification. While some hosts will be able to supply much better performance as well as high degrees of protection. Best Option For Hosting Your Own Website Listed below we dive deep into the very best inexpensive host plans out there. You’ll learn what core holding attributes are essential in a host as well as exactly how to analyze your very own organizing needs so that you can pick from among the very best economical hosting carriers listed below. Disclosure: When you purchase a host package with links on this web page, we earn some compensation. This assists us to maintain this website running. There are no additional costs to you at all by using our web links. The list below is of the very best inexpensive webhosting plans that I’ve personally made use of and also tested. What We Consider To Be Inexpensive Webhosting When we describe a webhosting package as being “Inexpensive” or “Budget plan” what we suggest is hosting that falls under the cost brace in between $0.80 to $4 monthly. Whilst looking into affordable hosting carriers for this guide, we looked at over 100 different hosts that fell under that cost array. We then evaluated the quality of their most inexpensive holding plan, worth for money and customer support. In this short article, I’ll be going over this world-class website holding firm and stick in as much pertinent information as possible. I’ll discuss the functions, the pricing options, and also anything else I can think of that I think could be of advantage, if you’re making a decision to sign up to Bluhost as well as obtain your internet sites up and running. So without further trouble, allow’s check it out. Bluehost is just one of the most significant webhosting companies worldwide, getting both large advertising support from the company itself and affiliate marketers that advertise it. It truly is a huge company, that has actually been around for a long time, has a large credibility, and is most definitely among the leading choices when it pertains to web hosting (most definitely within the top 3, at least in my publication). However what is it specifically, and should you get its services? Today, I will address all there is you need to understand, given that you are a blog writer or a business owner who is looking for a host, as well as does not understand where to get going, considering that it’s a terrific remedy for that audience generally. Let’s imagine, you want to hold your sites and also make them visible. Okay? You already have your domain (which is your website location or LINK) today you intend to “turn the lights on”. Best Option For Hosting Your Own Website You require some holding… To achieve all of this, and also to make your internet site noticeable, you require what is called a “server”. 
A server is a black box, or gadget, that saves all your internet site information (files such as pictures, messages, videos, links, plugins, and various other info). Currently, this server, has to be on constantly as well as it needs to be connected to the internet 100% of the time (I’ll be pointing out something called “downtime” later on). Furthermore, it likewise requires (without obtaining too elegant and also right into information) a file transfer protocol commonly called FTP, so it can reveal web browsers your site in its desired form. All these points are either costly, or need a high degree of technological ability (or both), to develop as well as preserve. And you can entirely head out there as well as learn these points by yourself and established them up … but what concerning as opposed to you buying as well as preserving one … why not simply “renting holding” rather? “This is where Bluehost comes in. You lease their web servers (called Shared Hosting) and also you introduce a site utilizing those servers.” Given that Bluehost keeps all your documents, the business also enables you to set up your material administration systems (CMS, for brief) such as WordPress for you. WordPress is an extremely preferred CMS … so it simply makes good sense to have that choice readily available (almost every organizing firm now has this choice as well). In short, you no more need to set-up a server and afterwards incorporate a software where you can develop your material, separately. It is already rolled right into one bundle. Well … envision if your web server remains in your home. If anything were to happen to it at all, all your files are gone. If something fails with its internal procedures, you require a specialist to repair it. If something overheats, or breaks down or obtains damaged … that’s no good! Bluehost takes all these headaches away, and cares for every little thing technological: Pay your server “rental fee”, and also they will certainly care for whatever. As well as as soon as you purchase the solution, you can then begin concentrating on including material to your site, or you can put your effort into your marketing projects. What Solutions Do You Get From Bluehost? Bluehost provides a myriad of different services, yet the primary one is hosting certainly. The holding itself, is of various kinds by the way. You can lease a shared server, have a dedicated web server, or also an online exclusive server. For the objective of this Bluehost review, we will certainly concentrate on holding services and other services, that a blog owner or an online business owner would require, rather than go unfathomable right into the rabbit hole as well as talk about the other solutions, that are targeted at even more skilled people. - WordPress, WordPress PRO, and also e-Commerce— these organizing services are the bundles that permit you to organize an internet site making use of WordPress as well as WooCommerce (the latter of which enables you to do e-commerce). After acquiring any one of these plans, you can start building your site with WordPress as your CMS. - Domain name Industry— you can also purchase your domain from Bluehost as opposed to other domain name registrars. Doing so will certainly make it less complicated to aim your domain to your host’s name servers, considering that you’re making use of the same marketplace. - Email— once you have bought your domain, it makes good sense to also get an email address tied to it. 
As a blog writer or online business owner, you must practically never utilize a totally free e-mail service, like Yahoo! or Gmail. An e-mail such as this makes you look amateur. Luckily, Bluehost provides you one completely free with your domain name. Bluehost likewise uses specialized servers. As well as you may be asking …” What is a committed web server anyway?”. Well, the important things is, the fundamental host packages of Bluehost can only a lot web traffic for your internet site, after which you’ll need to upgrade your organizing. The reason being is that the common servers, are shared. What this implies is that a person web server can be servicing two or even more sites, at the same time, one of which can be yours. What Does All This Mean For You? It means that the single server’s resources are shared, and also it is doing multiple tasks at any offered time. Once your internet site starts to hit 100,000 site sees monthly, you are going to require a committed web server which you can also receive from Bluehost for a minimum of $79.99 per month. This is not something yous ought to worry about when you’re starting out however you should maintain it in mind for certain. Bluehost Pricing: How Much Does It Price? In this Bluehost review, I’ll be focusing my focus mainly on the Bluehost WordPress Hosting bundles, because it’s the most popular one, and also very likely the one that you’re seeking and that will certainly match you the very best (unless you’re a significant brand name, business or website). The 3 readily available strategies, are as adheres to: - Basic Plan– $2.95 monthly/ $7.99 normal cost - Plus Strategy– $5.45 per month/ $10.99 routine rate - Choice And Also Plan– $5.45 each month/ $14.99 regular cost The first rate you see is the price you pay upon subscribe, as well as the second cost is what the price is, after the first year of being with the company. So basically, Bluehost is mosting likely to charge you on a yearly basis. And also you can additionally pick the amount of years you wish to organize your site on them with. Best Option For Hosting Your Own Website If you select the Fundamental plan, you will certainly pay $2.95 x 12 = $35.40 starting today and by the time you enter your 13th month, you will currently pay $7.99 per month, which is additionally billed annually. If that makes any type of sense. If you are serious about your site, you should 100% obtain the three-year option. This suggests that for the fundamental plan, you will pay $2.95 x 36 months = $106.2. By the time you hit your fourth year, that is the only time you will certainly pay $7.99 monthly. If you consider it, this technique will certainly save you $120 throughout three years. It’s very little, however it’s still something. If you intend to obtain greater than one website (which I extremely suggest, as well as if you’re major, you’ll most likely be obtaining more eventually in time) you’ll intend to utilize the selection plus plan. It’ll allow you to host unlimited sites. What Does Each Plan Deal? So, in the case of WordPress holding strategies (which are similar to the shared hosting plans, however are a lot more geared towards WordPress, which is what we’ll be concentrating on) the features are as follows: For the Fundamental strategy, you get: - One site only - Guaranteed internet site using SSL certificate - Maximum of 50GB of storage space - Cost-free domain for a year - $ 200 advertising credit history Bear in mind that the domains are bought independently from the holding. 
You can obtain a complimentary domain with Bluehost below. For both the Bluehost Plus hosting and Choice Plus, you get the following: - Unrestricted variety of websites - Free SSL Certificate. Best Option For Hosting Your Own Website - No storage or transmission capacity restriction - Complimentary domain for one year - $ 200 advertising and marketing credit history - 1 Office 365 Mail box that is cost-free for one month The Choice Plus strategy has actually an included advantage of Code Guard Basic Back-up, a back-up system where your data is saved and also duplicated. If any accident takes place and your web site information goes away, you can recover it to its original kind with this attribute. Notice that although both plans set you back the very same, the Choice Plan then defaults to $14.99 per month, normal rate, after the collection quantity of years you’ve chosen. What Are The Perks Of Using Bluehost So, why select Bluehost over other webhosting solutions? There are numerous webhosting, a number of which are resellers, yet Bluehost is one pick couple of that have stood the test of time, as well as it’s probably the most well known available (and also for good reasons). Here are the three primary advantages of selecting Bluehost as your web hosting company: - Web server uptime— your web site will not show up if your host is down; Bluehost has more than 99% uptime. This is very vital when it comes to Google SEO and rankings. The greater the far better. - Bluehost rate— how your server reaction identifies exactly how rapid your website shows on an internet browser; Bluehost is lighting fast, which indicates you will certainly minimize your bounce rate. Albeit not the most effective when it pertains to loading rate it’s still widely important to have a quick rate, to make customer experience much better and also better your position. - Unrestricted storage— if you get the Plus plan, you need not stress over the number of files you store such as video clips– your storage capability is unrestricted. This is really vital, because you’ll possibly encounter some storage issues later on down the tracks, and you don’t desire this to be a trouble … ever. Last but not least, client assistance is 24/7, which suggests regardless of where you are in the globe, you can speak to the support team to fix your web site issues. Pretty common nowadays, yet we’re taking this for given … it’s likewise extremely essential. Best Option For Hosting Your Own Website Also, if you’ve gotten a cost-free domain with them, then there will certainly be a $15.99 charge that will certainly be subtracted from the amount you originally bought (I picture this is due to the fact that it kind of takes the “domain out of the marketplace”, uncertain about this, however there most likely is a hard-cost for registering it). Last but not least, any type of demands after thirty day for a refund … are void (although in all honesty … they should possibly be stringent below). So as you see, this isn’t always a “no doubt asked” policy, like with some of the various other hosting options around, so make sure you’re fine with the plans prior to proceeding with the holding.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00651.warc.gz
CC-MAIN-2023-06
13,846
81
https://technical.ly/philly/2014/05/19/philadelphia-school-district-budget-visualization/
code
The pair built the visualization, which shows the District’s 2014 budget and proposed 2015 budget, at this month’s EdTech Hackathon with budget data the District released ahead of the event. District representatives said they hoped technologists would use the data to build tools that could increase transparency by making the budget easier to understand. Alfano, part of the Jarvus dev firm, is the co-captain of Code for Philly, the local Code for America civic hacking group, and Ancona is the assistant director of the Corzo Center at University of the Arts. The project is beautiful but needs lots of context — like how debt financing compares with other big city school districts or what the $188 million going toward ‘secondary education’ means exactly. For those with some understanding of big budget institutions, it’ll be fascinating.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00090.warc.gz
CC-MAIN-2019-30
1,471
12
https://www.programmableweb.com/sdk/relayr-python-sdk-relayr
code
The Relayr Python SDK by Relayr offers an Internet of things platform that can be integrated with existing applications. It features a cloud solution for receiving and saving data from sensors. The current version is 0.3. AWS has launched CloudTrail: a resource tracking and user visibility API. CloudTrail records API calls made to AWS accounts and returns log files. CloudTrail records calls across AWS services. Whether you need to track calls made to the AWS Management Console, AWS SDKs, command line tools, Cloud-Formation, or other AWS services; CloudTrail's call history helps increase security analysis, resource change analysis, and compliance. Yieldex, the online ad inventory forecasting and optimization start-up, has won the second annual Amazon Web Services Start-Up Challenge. As grand prize winner, Yieldex receives US $50,000 in cash and $50,000 in AWS service credits, along with the possibility of an investment offer from Amazon. "Visibility Delivered" is Yieldex's motto; by winning perhaps the biggest web services start-up competition, they've certainly delivered visibility for themselves. The Nexmo SMS API is a service that allows you to send and receive SMS messages anywhere in the world. Nexmo provides REST APIs, but it's easier to use their official Java SDK. This tutorial covers how to send and receive SMS messages with the SDK so that you can use them in your applications.
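The teaser above points to Nexmo's Java SDK; as a rough, unofficial sketch of the same send-an-SMS call, here is the underlying REST request made with Python's requests library. The credentials and phone numbers are placeholders, and the endpoint shown is the classic https://rest.nexmo.com/sms/json API, which may differ from the current Vonage-branded offering.

```python
import requests

# Placeholder credentials -- substitute your own Nexmo/Vonage API key and secret.
params = {
    "api_key": "YOUR_API_KEY",
    "api_secret": "YOUR_API_SECRET",
    "from": "AcmeInc",            # sender ID or virtual number
    "to": "447700900000",         # destination number in international format
    "text": "Hello from the Nexmo SMS API",
}

# The classic Nexmo SMS endpoint accepts a simple form-encoded POST.
response = requests.post("https://rest.nexmo.com/sms/json", data=params)
print(response.json())
```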
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00672.warc.gz
CC-MAIN-2017-51
1,409
4
http://querytreeapp.com/blog/four-bad-reasons-for-building-monstrous-reports/
code
Occasionally, I get asked to create a report that looks a little bit like this: Or even worse, a report that looks like this: This is always a bad idea because no human being can reasonably makes sense of that information in one go; and reports exist principally for human beings to be to look at and make informed decisions. If looking at a report leaves you wondering “what does this mean”, then the report is badly designed. It is costing your organisation money every time someone looks at it and wastes an hour pondering whether they should do something because of it. If you dig into why you’re being asked to produce one of these monstrous reports you’ll usually get to a better outcome than if you just implemented it the way you were told. Asking the awkward questions can be a hard though, here are some of the things I've heard more than once when I started digging: - “This is the Information that Requested” In my experience, this is the most common cause of monstrous reports. Some other department elsewhere within the organisation has requested information, and just to be on the safe side, they've asked for more than they need. Over time, the interaction between these two departments has been formalised into the production of “the report” and if ever a new piece of information is needed, someone in the receiving department requests that it be added to “the report”. If it’s possible to speak to the department or group that receives this report, you’ll either find someone who applies a series of checks, adding up and comparing to other data sources, or someone who copies, pastes and transposes the useful bits of your report into their report, before they send it on to it’s ultimate destination. Which will leave you to wonder, why not just tell some piece of software to do the necessary checks and raise an alert if they fail, or output that final report in the first place? - “The Information is Entered into ” Occasionally, a report is actually used as a sort of interface between two IT systems. Someone faithfully inputs the figures in the report from system A, into system B. Despite that being error prone, slow, expensive and something that computers are really really good at. Sometimes you even find that both sides have built software to generate/upload the report at either end of the process, and yet the communication technology being used is still a PDF/Excel file sent over email. What’s actually required here is an automated interface between the two systems. But getting this done may require a lot of cross departmental organisation, and the introduction of an inter-dependency between two separate IT projects - something a lot of project managers will resist doing. - “ Looks at This Information Every Month and Decides What to Do” My personal favourite. “Person X” tends to be quite an important person within the organisation. Maybe the Financial Director, or a Programme Manager. They haven’t so far been able to articulate what they’re looking for, so rather than glancing at a dashboard full of green blobs indicating everything’s OK, they've asked to see a raft of charts and numbers every month so they can look for things that seem “unusual” or “out of the ordinary”. Sometimes it’s possible to sit down with Person X and work with them to pinpoint what it is they look for in all the numbers and what “ordinary” means. If you’re in a position to do that, it’s definitely worth doing, because the end result will be a clearer and more efficient report, and probably less work for everyone. 
- “The Government/Regulator Requires the Information in that Format” Basically, you’re screwed. Just implement the report as specified and move on. Although this is most likely a case of either reason #1 or #2, you have zero chance of influencing a governmental body and explaining to them that their report is a monster, and frankly, it’s not in your employer's interests for you to do so either. It might be in the government body's interest to have a clearer report, but seeing as they face no commercial pressure, and you are forced to work with them, you might as well just do what they tell you and get on with your life. Reports that swamp you with information are never a good idea. They are costly to build, slow to run and drain people’s time whenever they’re looked at. Whenever possibly, I try to avoid building them, because doing so is not serving the customer - even if it’s explicitly what the customer has asked for.
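To make the point under reasons #1 and #2 concrete, that a piece of software can do the routine checking instead of a person wading through a monster report, here is a small hedged sketch using pandas; the file name, column names and tolerance are invented for the example and would need to match the real report extract.

```python
import pandas as pd

# Invented file and column names -- adjust to the real report extract.
df = pd.read_csv("monthly_report.csv")

problems = []

# Check 1: line items should add up to the reported total (within rounding).
if abs(df["line_total"].sum() - df["reported_total"].iloc[0]) > 0.01:
    problems.append("line items do not add up to the reported total")

# Check 2: no negative quantities should appear.
if (df["quantity"] < 0).any():
    problems.append("negative quantities found")

if problems:
    # In practice this could send an email or post an alert instead of printing.
    print("ALERT:", "; ".join(problems))
else:
    print("All checks passed - nobody needs to read the full report this month.")
```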
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00314.warc.gz
CC-MAIN-2021-04
4,558
13
https://www.aftercollege.com/company/bae-systems/240124/58163251/?refererPath=explore
code
BAE Systems is seeking an entry-level Software Engineer. Will design and develop software platforms and mission applications for intelligence and defense customers with domain emphasis on Geospatial Intelligence, Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR), and Mission Management. Will work in a rapid development environment as part of a multi-disciplinary team. Tasks will include software coding, debugging, and integrating. Tasks may include integration, regression, and requirement testing of complex SW/HW systems. May travel occasionally to customer locations to support product demonstrations, software installation, and system testing. Required Education: Bachelor's degree and 0+ year(s) related experience Bachelor's degree in an Engineering or Scientific field or equivalent experience. Employees in California must have a Bachelor's degree in the applicable field, or an applicable certification, equivalent certificate or any required license. Equivalent experience will not be considered. - GPA 3.0 - College transcripts at time of interview (unofficial copy is acceptable). - Proficiency in multiple high-level programming languages (C++, Java, C#). - Understanding and skills in software design and code, including Object Oriented Analysis & Design (OOAD). - Team player with a proactive attitude and ability to be productive in a dynamic environment. - Ability to work in a collaborative environment (open seating arrangement). - Effective verbal and written communications skills. - BS degree in Computer Science. - Software related internship, work, or hobby experience (software development, IA, web development). - Operating Systems (UNIX/Linux and/or Windows and .Net, Android). - Development methodologies (Waterfall, Agile, Iterative). - Architecture (Web Services and SOA). - Development languages (C++, Java/J2EE, C#). - Database tools and design (Oracle, Postgres, SQL, MongoDB, AllegroGraph, NoSQL, RDF, SPARQL). - Development tools and services (MS Visual Studio, Eclipse, JBuilder, Spring Framework, JBoss, Hibernate, and automated test tools). - User Interface development tools. - Configuration Management tools (ClearCase, Subversion, Git). - Algorithm development. - Basic understanding of Software Security. - Open Source, cloud and virtualization software and services. BAE Systems is a premier global defense and security company with approximately 100,000 employees delivering a full range of products and services for air, land and naval forces, as well as advanced electronics, security, information technology solutions and customer support and services. Information Solutions, based in Reston, Virginia, is among the 10 largest IT providers to the U.S. government, serving most of the federal defense and civilian marketplace. It provides network-centric command, control, computing, and intelligence (C3I) solutions; wideband networking radio systems; information systems for the U.S. intelligence community; geospatial information services; and information technology services. Leveraging its knowledge of signals and data derived from signals, Information Solutions has attained a market-leading position in advanced information technology research, intelligence analysis and production, and geospatial exploitation software. People are the greatest asset in any Company ... BAE Systems is committed to a high performance culture and provides an environment that challenges our employees to be remarkable and obtain their full potential. 
Equal Opportunity Employer. Females. Minorities. Veterans. Disabled
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122041.70/warc/CC-MAIN-20170423031202-00573-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,607
28
http://bolobuild.blogspot.com/2011/
code
Monday, March 21, 2011 HAC-US radios are a pain. Need to try just transmitting something, since the setting mode isn't working right. The settings the Arduino and the radios use for serial data are different: serial data frames usually include "start" and "stop" bits, which mark where each byte begins and ends. That means changing the Arduino's serial settings to try to get it working.
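The post doesn't include the actual configuration, so as a hedged illustration from the PC side, here is how the same frame settings (data bits, parity, stop bits) look with pyserial; the port name, baud rate and frame parameters are assumptions, not the HAC-US radio's documented values.

```python
import serial

# Assumed port and frame settings -- check the radio's datasheet for the real values.
port = serial.Serial(
    "/dev/ttyUSB0",
    baudrate=9600,
    bytesize=serial.EIGHTBITS,    # 8 data bits per frame
    parity=serial.PARITY_NONE,    # no parity bit
    stopbits=serial.STOPBITS_ONE, # one stop bit marks the end of each byte
    timeout=1.0,
)

port.write(b"HELLO\n")            # transmit a test message
print(port.read(16))              # read back whatever the radio returns
port.close()
```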
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00570.warc.gz
CC-MAIN-2018-26
358
2
https://community.imgtec.com/forums/topic/crash-with-debug-context-enabled/
code
- May 11, 2016 at 9:49 pm #53685 In PowerVR SDK R1.1’s PVRVFrame (win32) with KHR_debug support advertised, if an EGL debug context is enabled, CPU memory just climbs, and eventually the app crashes in VFrame’s libEGL.dll with a std::bad_alloc error. We’re seeing this on systems with ATI GPUs and NVidia GPUs. Note that this is without PVRTrace recorder libs plugged in. Also note that the KHR_debug APIs we use are glDebugMessageInsert (extensively), glObjectLabel (minimal), and glDebugMessageCallback. First-chance exception at 0x762BC54F in myApp.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x002F171C. Debug Error! Program: …\myApp.exe R6010 – abort() has been called > msvcr110d.dll!_NMSG_WRITE(int rterrnum) Line 226 C msvcr110d.dll!abort() Line 62 C msvcr110d.dll!terminate() Line 97 C++ msvcr110d.dll!__FrameUnwindFilter(_EXCEPTION_POINTERS * pExPtrs) Line 980 C++ msvcr110d.dll!__FrameUnwindToState(EHRegistrationNode * pRN, void * pDC, const _s_FuncInfo * pFuncInfo, int targetState) Line 1068 C++ msvcr110d.dll!@_EH4_CallFilterFunc@8() Line 394 Unknown ntdll.dll!777134c9() Unknown [Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll] [External Code] libEGL.dll!0038254b() Unknown [External Code] libEGL.dll!0038254b() Unknown libEGL.dll!0036197e() Unknown libEGL.dll!00361156() Unknown libEGL.dll!0035d4e0() Unknown libEGL.dll!0035d454() UnknownMay 12, 2016 at 3:31 pm #53698 I think I’ve found the cause of the memory growth. When debug messages are inserted the emulator is pushing them onto an internal queue to be presented in the GUI, but if the GUI isn’t running the queue just fills up with messages which are never processed. So a somewhat awkward workaround would be to leave PVRVFrameGUI running so that it empties out the message queue. I will get this bug fixed for the next release.May 14, 2016 at 1:59 am #53706 Ok, thanks Chris. That’s an easy work-around, or just disable the debug context except when capturing a trace. The only added benefit I’m aware of for running with a debug context w/o trace capture active is that you can get callbacks for GL errors rather than have to trap them manually (e.g. add glGetError after every GL call).
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886792.7/warc/CC-MAIN-20180117003801-20180117023801-00450.warc.gz
CC-MAIN-2018-05
2,235
8
https://www.my.freelancer.com/projects/php/orkut-clone.83388/
code
I want an exact clone of Orkut. I would like to see a demo. I want the script to come with an installer. I'll cancel this project if I'm not satisfied.
7 freelancers are bidding an average of $73 for this job
I have the script, but for the demo I have to install it. You can trust me. I can install this script on your server, and you can make the payment after you are happy with the work. Is that OK with you?
We have expertise in cloning such sites and we are working on similar stuff... we are confident of providing a robust, professionally built site. Please check PMB.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864943.28/warc/CC-MAIN-20180623054721-20180623074721-00540.warc.gz
CC-MAIN-2018-26
561
6
http://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15211/spring.95/www/silly-example.html
code
Good! You have followed the indicated link, and are now in a new document. Perhaps the most important thing you need to know is how to go ``back'' to where you were. The ``back'' command in Mosaic is a button at the bottom of the screen labelled ``Back'' (or an arrow pointing to the left if you're using a Mac). Click on the back button now to return to the main document.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257361.12/warc/CC-MAIN-20190523184048-20190523210048-00124.warc.gz
CC-MAIN-2019-22
381
2
https://engineeringbreakdown.com/tag/beam/
code
As I promised a few weeks ago, I'm back with a tutorial! On this occasion, a cantilever beam will be modelled in Abaqus/Standard. What is more, the importance of defining a good mesh (not only the element size matters!) will be illustrated with several examples. So, first things first. A cantilever beam is a structure which has one of its ends fully constrained, meaning that all degrees of freedom are restricted at that end. An example is presented in the following figure: A case similar to the one presented above will be modelled using the Finite Element package Abaqus/Standard. The structure can be created using different types of elements: beam, bar, solid or shell. I have decided to model it using shell elements, since that will allow me to show the influence of the element size and type in a very simple way. In order to create the component we need to be in the Part module, where we should select the 3D deformable shell (planar) option. Then we will be able to create the geometry; in this case a simple rectangle of 20mm x 100mm is enough. Please bear in mind that Finite Element codes are unitless, so all the parameters need to be defined in a consistent system of units. I have decided to use mm, GPa and kN.
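For readers who prefer scripting the same steps instead of clicking through Abaqus/CAE, a minimal sketch using the Abaqus Python scripting interface might look like the following. It is not part of the original tutorial, it only runs inside Abaqus/CAE's own interpreter, and the model, sketch and part names are illustrative.

```python
# Run inside Abaqus/CAE (File > Run Script), not a standalone Python interpreter.
from abaqus import mdb
from abaqusConstants import THREE_D, DEFORMABLE_BODY

model = mdb.models['Model-1']

# Sketch the 100 mm x 20 mm rectangle used as the cantilever mid-surface.
sketch = model.ConstrainedSketch(name='beam_profile', sheetSize=200.0)
sketch.rectangle(point1=(0.0, 0.0), point2=(100.0, 20.0))

# Create the 3D deformable planar shell part from the sketch.
part = model.Part(name='CantileverBeam', dimensionality=THREE_D,
                  type=DEFORMABLE_BODY)
part.BaseShell(sketch=sketch)
```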
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00277.warc.gz
CC-MAIN-2020-29
1,221
4
https://therootselaargroup.com/en/tankbouw-rootselaar/tankbouw-rootselaar/engineering/solid-modelling-and-drafting/
code
Tankbouw Rootselaar uses 3D design modelling. It greatly improves design quality because it is a more complete process than 2D design. As a result, many human errors that can occur with traditional 2D design methods are avoided. In the past, problems such as component collisions, incorrect quantities or parts that do not fit could happen because a designer working in only 2D was forced to memorise much of the information. It is this point that gives rise to errors, because the brain cannot visualize to an exact scale. Reducing human error by using the 3D modelling design methods shown in our 3D AutoCAD course minimizes the need for re-work because the design quality is greatly improved. Using 3D design modelling to get quantity data is easy because items are represented as they occur. Consequently, as long as a 3D CAD design is created as a true-to-life model, the 3D modelling design represents quantities with exact accuracy. Using 3D design modelling is also useful for customer presentations, brochures, manufacturing, and technical publications. Clearer communication of design intent at the earliest stage is always useful. We can also provide 3D models as STEP files so that implementation in other 3D software programs is possible.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00508.warc.gz
CC-MAIN-2023-06
1,250
4
http://playboard.me/android/apps/com.tuyware.mydisneyinfinitycollection
code
Track what you own from Disney Infinity in a beautiful, fast & easy way. My Disney Infinity Collection is powered by http://www.DisneyInfinityFans.com and as a result contains all current data for all characters, play sets, power discs & adventures.

Allows tracking of:
* Owned Disney Infinity Characters (supports multi-select to mark as 'owned' in 1 action)
* Owned Disney Infinity Power Discs (supports multi-select to mark as 'owned' in 1 action)
* Owned Disney Infinity Play Sets (supports multi-select to mark as 'owned' in 1 action)
* Completed Adventures
* Completed Character Adventures
* Completed Golden Missions
* Character: Grid & List
* Power discs: Grid & List

Different sorting options:
* Adventure: by name, by type & name
* Character: by name, by owned & name, by play set & name
* Play set: by name, by owned & name
* Power disc: by name, by owned & name, by type & name

Provides you with various information:
* What combinations of certain Power Discs have special results
* Description of the different Adventures
* Goals of the different adventures
* More to come

Backup & Restore

I have created this application because we own a lot of Disney Infinity stuff, and it became hard to track what we already owned. This also means that I will be supporting this app, since my wife and I will be making regular use of it. My Disney Infinity Collection is completely free, with ads. A pro key is available for those who want to support me and give me some extra motivation :) I'd love to hear your ideas and feedback, so don't hesitate to email me!
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011210359/warc/CC-MAIN-20140305092010-00019-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
1,564
24
https://forums.warpportal.com/index.php?/topic/236339-player-thank-giving-community-game-treasure-explore-endclose/
code
Hello everyone, I am FlurryChin in iRO. I am organizing a Thanksgiving community game for everyone who knows this game, so feel free to join. Why am I doing this? Because I sold my Toxic Enhancement Chimera MVP card at 34B, so the prize for this game is 500M.

Game status: Ended!!
Time: 18 May 2018, 8pm-11pm GMT+8
1st prize winner is FatalLuan, congratz
2nd prize winner is Ezioxa, congratz
3rd prize winner is Merch, congratz
The answer for the quiz is +60
The password to enter the private room is +60Call
The place I was hiding is Malaya Port (356,327)
It is recommended that players use Google to search for the answer.

Edited by Newbi001, 18 May 2018 - 04:08 AM.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887377.70/warc/CC-MAIN-20200705152852-20200705182852-00032.warc.gz
CC-MAIN-2020-29
650
12
https://www.artima.com/forums/flat.jsp?forum=106&thread=233278
code
During last week's The Server Side Java Symposium in Prague, it looks like scaling is the biggest issue. I've collected some pointers from presentations on the topic and will certainly return to it when we start doing this sort of stuff. Prague is a beautiful city and forms an excellent conference location. This year, The Server Side Java Symposium Europe is held here (I am writing from the conference hotel) and it seems that there are two topics that keep the minds of Java developers busy: web frameworks and scaling. As I have a natural inclination towards the latter, here are some quick notes on the topic. Scaling is something that every Java developer is going to have to deal with as hardware is converting to multi-cores. Two is the norm today, but we were told to expect a doubling of cores every 18 months or so (yup, Moore's law still seems to hold), meaning that a measly notebook will likely have 8 cores in three years, and servers will typically ship with 32 cores. In order to use that horsepower, software needs to change to parallelism which means that every Java developer has to read up on multi-threading, synchronization (and why you should avoid both), java.util.concurrent (and why you want it), etcetera. However, it doesn't stop at a single machine. Prominently present in the vendor area were GigaSpaces and Terracotta, and some other products regularly appeared on speakers' slides as well: Gemfire from Gemstone and Oracle's Coherence, for example. All of these products promise linear scaling of application performance to tens or hundreds of machines by distributing data and trying to co-locate data throughout the computer cluster with processing. GigaSpaces deals with it from mostly a Jini/Javaspaces perspective, Terracotta essentially gives you a distributed JVM, Gemfire is at its core an OODB, and Coherence is a distributed JCache implementation - different pedigrees, same goal: remove disk I/O from the performance equation. John Davies had a great presentation on that subject. His customers, mostly banks, simply can't scale their databases to the load they are getting in some systems, so they essentially do without it. I liked his comparison where today we think about disk for "on-line" access and tape for backup, and we are moving to memory for "on-line" access and disk for backup. It makes sense: if you have your data distributed across hundreds of machines in various locations, chances that you will lose in-memory data are negligible and for a lot of applications it makes sense to sync stuff asynchronously to disk "just in case", but not bother doing it inside the transaction. Any of the products I mentioned will facilitate this model, and I hope to get back to this topic as we evaluate most of them for our own use (we have outgrown the size where doing the regular LAMP thing makes sense a while ago, so this looks to be our bright future). As I said, just some pointers at the moment. The congress is still in progress and did I mention that Prague has a lot of extremely nice bars? Yup, I'm a bit hung-over so this is all I'm able to write down today :-) By the way, thanks for Bill Venners for gracefully allowing me to blabber about this stuff on his site. I hope you will enjoy my postings - feel free to correct me if I'm wrong, I have a thick enough skin to handle your criticism. Nice writeup Cees. Just a small comment about GigaSpaces - though we did initially start as a JavaSpaces implementation, the product today has grown beyond that. 
It's very much aligned with the Spring way of doing things, i.e.non-intrusiveness, annotation/xml based configuration, smart defaults, dependency injection, etc. Jini is there under the hood as a discovery mechanism and is not exposed to the user. The following link elaborates on the GigaSpaces programming model and Spring integration (we call it OpenSpaces): http://www.gigaspaces.com/wiki/display/OLH/OpenSpaces+Overview > Coherence is a distributed JCache implementation Coherence was built as a peer-to-peer, clustered, in-memory data management system. The JCache API work was largely based on the Coherence API because of the popularity of using Coherence to cache data in clustered (scale-out) J2EE applications. > All of these products promise linear scaling of application > performance to tens or hundreds of machines by distributing > data and trying to co-locate data throughout the computer > cluster with processing. Linear scale is not possible for every use case, but for the ones that are possible, Coherence does deliver linear scalability. Scalability (via partitioning) isn't actually the hard part -- it's achieving scalability in a dynamic system that maintains availability and information reliability as new servers come online or existing servers fail.
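To make the "distribute data and co-locate it with processing" idea a bit more concrete, here is a small, hedged sketch of key-based partitioning in Python. It is emphatically not how GigaSpaces, Terracotta, Gemfire or Coherence are implemented; it only illustrates why routing each key to a fixed owner node lets capacity grow roughly with the number of nodes, and why the hard part is what happens when nodes join or fail.

```python
import hashlib

class PartitionedStore:
    """Toy partitioned key-value store: each key lives on exactly one node."""

    def __init__(self, node_names):
        self.nodes = {name: {} for name in node_names}
        self.names = list(node_names)

    def _owner(self, key):
        # Hash the key so entries spread evenly across nodes; a real data grid
        # would use consistent hashing plus replication so that adding or losing
        # a node moves only a fraction of the keys and loses none of them.
        digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
        return self.names[int(digest, 16) % len(self.names)]

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

store = PartitionedStore(["node-a", "node-b", "node-c"])
store.put("account:42", {"balance": 100})
print(store.get("account:42"))  # served by whichever node owns the key
```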
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00126.warc.gz
CC-MAIN-2022-27
4,797
13
https://balhafni.github.io/EMNLP_2022/blog/startups-in-nlp/
code
Name: Startups in NLP, a panel in the Industry Track of NAACL HLT 2021 Time: Monday June 7 at 1pm PDT, 4pm EDT. Duration: 60 minutes Questions will be taken from the audience Alon Lavie is the VP of Language Technologies at Unbabel, a leading translation technology and services company, and a Consulting (adjunct) Professor at the Language Technologies Institute at Carnegie Mellon University. At Unbabel, Alon leads and manages the company’s US AI lab based in Pittsburgh and provides strategic AI leadership company-wide. Prior to joining Unbabel, Alon was a senior manager at Amazon, where he led and managed the Amazon Machine Translation R&D group in Pittsburgh. Prior to that, Alon co-founded “Safaba Translation Solutions”, a Machine Translation technology start-up company, where he was Chairman of the Board, President and CTO. Safaba was acquired by Amazon in 2015. From 1995 to 2015, Alon was a Research Professor at the Language Technologies Institute at Carnegie Mellon University. His main research interests and activities focus on Machine Translation evaluation (the METEOR and COMET metrics), translation Quality Estimation, Machine Translation adaptation approaches with and without human feedback, and methods for multi-engine MT system combination. Alon served as the President of the International Association for Machine Translation (IAMT) (2013-2015). Prior to that, he was president of the Association for Machine Translation in the Americas (AMTA) (2008-2012), and was General Chair of the AMTA 2010 and 2012 conferences. He is a member of the Association for Computational Linguistics (ACL), where he was president of SIGParse - ACL’s special interest group on parsing (2008-2013). Apoorv Agarwal is the Chief Executive Officer and founder of Text IQ, a leading AI company whose products are used by some of the world’s largest enterprises and organizations to identify sensitive information hiding in their unstructured data. Text IQ’s AI is used by its customers to identify unconscious bias within their organization, protect privileged and confidential information, and elevate their privacy programs. In May this year, Apoorv secured a successful exit by selling Text IQ to Relativity, a $3.6 billion unicorn, which is the legal and compliance platform of choice for the vast majority of Fortune 500 companies and top law firms. By joining hands with Relativity, Apoorv and his team at Text IQ intend to realize their shared vision of bringing the benefits of AI to the world’s largest companies so they can finally discover the truth in their data. With a natural curiosity towards finding ways to use AI to solve organizational and business problems, Apoorv has dedicated a significant part of his early business and academic career to research and development. During his PhD in Computer Science at Columbia University, he developed the socio-linguistic hypergraph (SLH), which eventually became the core technology behind Text IQ’s products—the same technology that is now being used by some of the world’s most prominent and heavily regulated companies, including 5 out of the top 10 life sciences companies and 4 out of the top 10 banks. Before Text IQ, Apoorv was a member of the IBM Watson core team, where he developed two patents and received the prestigious IBM Ph.D. Fellowship award. Spence Green is the CEO of Lilt, which he co-founded with John DeNero. He graduated from Stanford University with a PhD in computer science under the direction of Chris Manning and Jeff Heer. 
His work is on machine translation and mixed-initiative systems. Nasrin Mostafazadeh is Co-founder of Verneek, a new AI startup that is striving to enable anyone to make data-informed decisions without needing to have any technical background. Before Verneek, Nasrin held senior research positions at AI startups BenevolentAI and Elemental Cognition and was earlier at Microsoft Research and Google. She received her PhD at the University of Rochester working at the conversational interaction and dialogue research group, with her PhD work focused on commonsense reasoning through the lens of story understanding. She has started lines of research that push AI toward deeper understanding and showing basic common sense, with applications ranging from storytelling to vision & language. She has been a keynote speaker, chair, organizer, and program committee member at different AI venues. Nasrin was named to Forbes’ 30 Under 30 in Science 2019 for her work in AI. Kieran Snyder is the CEO and Co-Founder of Textio, the augmented writing platform. For anything you write, Textio tells you ahead of time who’s going to respond based on the language you’ve used. Textio’s augmented writing engine is designed to attach to any large text corpus with outcomes to find the patterns that work. Prior to founding Textio, Kieran held product leadership roles at Microsoft and Amazon. Kieran has a PhD in linguistics from the University of Pennsylvania. Her work has appeared in Fortune, Re/code, Slate, and the Washington Post.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510085.26/warc/CC-MAIN-20230925183615-20230925213615-00556.warc.gz
CC-MAIN-2023-40
5,070
12
https://boards.greenhouse.io/codility/jobs/4576775003
code
Senior Site Reliability Engineer The Codility Mission The world needs more problem-solving capacity, so at Codility we’re redefining how some of the most influential software engineering teams in the world are built. Using our platform, candidates are evaluated on an even playing field based on the technical skills that matter so companies can make better hiring decisions without wasting engineering time. Over 10 years we’ve helped thousands of engineering leaders build high-performing, diverse teams, and evaluated the problem-solving skills of over 12 million job candidates. 2020 drove our customers online putting our platform at the forefront of remote-first hiring, helping companies like Microsoft, Tesla and Slack evolve their approach. We are driving a shift in how world class engineering teams are formed and have a bold vision for the future of tech hiring. As a result of our continued success, we’re growing and expanding our very own Engineering department and have this exciting opportunity. Location: Poland, Germany, UK - please note at the current time we're unable to issue any sponsorship. We are forming our dedicated Data Engineering Team. So you will be someone who relishes the challenge of building a data platform from the ground up and can work at scale in a dynamic environment. You will act as a connector between our data teams and the rest of our diverse engineering department to create high-performance data processing solutions that are significant in the future growth of Codility. What you will be doing with us: - Scaling, building, and operating platform supporting our SaaS application (i.e., Kubernetes, Istio, ArgoCD, Prometheus stack and EFK, CI/CD with Gitlab) - Together we work on foundations for effective software delivery of the Codility platform and the platform's reliability plus security. You will be a great fit for this role if you have: - Proficiency in AWS or other cloud provider, Terraform, and Kubernetes is a must - Experience with Istio, ArgoCD or Terragrunt is a plus - Programming skills necessary for building automations in Python or Golang - Experience in building, operating and scaling a SaaS platform is a huge plus. Codility Tech Stack: - Frontend: ES2018, Typescript, React, Redux, styled-components, Jest, React Testing Library - Backend: Python3, Django, Golang - Database: PostgreSQL, Redshift, DynamoDB - Continuous Deployment/Tools: Gitlab/GitlabCI, PyCharm/VSCode - Infrastructure: Kubernetes (EKS), Istio, Prometheus monitoring stack, EFK, Terraform, Terragrunt, AWS, Chef. Benefits @ Codility Codility believe in a people first culture, reflected in our core benefits package: - Remote first culture with hubs in San Francisco, London, Warsaw, and Berlin - You choose where you want to work be it 100% from home or in an office - 27 days of PTO globally with generous Holiday allowance and four additional mental health days designated for mental wellbeing - Employee incentive stock options - Company retirement match - Robust physical and mental health benefits - Investment in your ongoing development through our learning fund - Culture of trust, empowerment, and inclusion Disclaimer: At Codility, we know that great work isn’t done without a phenomenal team. We are always looking to hire the absolute best talent and recognize that diversity in our experiences and backgrounds is what makes us stronger. We insist on an inclusive culture where everyone feels safe to contribute and help us innovate. 
We hire candidates of any race, color, ancestry, religion, national origin, sexual orientation, gender identity, age, marital or family status, disability, or veteran status. These differences are what enable us to work towards the future we envision for ourselves, our product, our customers, and our world
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00676.warc.gz
CC-MAIN-2021-31
3,807
32
http://blog.port3101.org/pinjo/136-interview-author-blackberry-developer-john-wargo.html
code
Interview with Author and BlackBerry Developer John Wargo There has been a ton of publicity lately about new Blackberry devices on the horizon, OS 5.0 and the BlackBerry Widget SDK. However, the upcoming BlackBerry Developers Conference got me thinking about John Wargo’s book. For those of you who are not familiar with John, he is a former employee of Research in Motion who had the responsibility of educating RIM customers on developing applications for the BlackBerry platform. At WES this past May, John mentioned that he was writing a book called “BlackBerry Development Fundamentals” and to expect its release in the fall. I found his flier in my WES schwag pile (yes, I still have one) and checked out his website http://www.bbdevfundamentals.com where I found out that the book is going to be officially released on November 9. I got in contact with John in mid-October and he was gracious enough to sit down for an interview and give me the lowdown on his book. Q: Thank you for sitting down with me today John. Let’s start off by telling me about the book. Well, the book is all about BlackBerry development. Inside it is everything a developer needs to get started with BlackBerry development. The book is laid out the way I would do a complete course on BlackBerry development – starting with the basics and working through all of the topics. The book uses the information in early chapters to enhance subsequent chapter content. It’s not a Java book although there are 6 chapters dedicated to Java development and Java development tools. It covers the foundations like data connection paths and MDS, three chapters on Push, Browser applications (BlackBerry browser capabilities, developing browser applications for BlackBerry and testing/debugging browser applications), Java development for BlackBerry (including how to use the tools, debug/test and deploy applications) and more. It doesn’t assume you want to do Java development, which most people think is the only option, but tries to give you the fundamentals on all aspects of the topic. The THREE chapters on Application Data Push presents the topic differently than it has ever been covered before – anywhere! Q: What prompted to write your book and why do you feel that writing this book was important? When I joined Research In Motion, I had to quickly come up to speed on BlackBerry development in order to be able to do my job. I had a ton of development experience (more than 20 years experience as a professional developer and a couple of award winning commercial software products under his belt) but I struggled to find the information I needed to learn BlackBerry development. Don’t get me wrong, there’s a lot of really good information on the BlackBerry Developer’s web site (http://www.blackberry.com/developers), but you have to kinda already know what you’re trying to do to use the information that’s up there. Even the overviews of the BlackBerry Enterprise Server (BES) and Mobile Data System (MDS) were full of marketing speak and feature lists when what I really wanted to know was not only what it did but how and why it did it. The how’s and why’s of any technology are an important part of how I learn. 
When looking at the various development tool options, the BlackBerry Java Development Environment (JDE), BlackBerry MDS Studio (now with an announced end of life scheduled for December 31, 2009) and Browser development, nowhere was there a document that compared and contrasted the options to you had help deciding on which one to use for a new application project. So much of the information was buried deep inside the developer knowledge base, which wasn’t really searchable, or in the heads of the RIM old timers. I was able to get the information I needed, but I had to either spend a lot of time browsing around until I found it or bug the internal developers I knew until I got the information. I learned what I needed to learn, but it just wasn’t easy. I spent a lot of time in the field working with Research In Motion customers – educating them on the capabilities of the platform and the tools and quickly learned that the corporate developer community was having the same difficulty I was having. Research In Motion’s ISV partners had dedicated support resources at their disposal, but ‘Frank the Developer’ working in an office somewhere or the hobbyist dipping his feet in the BlackBerry waters was a bit overwhelmed by the size of the ocean. The book is important because there’s really only source of information for BlackBerry developers, everything on the BlackBerry Developer web site, and there needed to be a different perspective. There needed to be something that covered the options from beginning to end and designed for the beginning and moderately experienced developer. The experienced BlackBerry Java developers don’t need it. Q: How do you feel it compares to others on the market? Differs? What hole does it fill? There aren’t any other BlackBerry Development-only books in the market right now, so I’d say it compares rather favorably. I’d even be willing to say it’s the best book of its kind on the market. Seriously though, there are only a couple of books on the market that touch upon BlackBerry development topics, but none actually dedicated to BlackBerry development. It’s been a while since I looked at it, but Craig J. Johnston’s Professional BlackBerry is now four years old and covered administrator, user and development related topics. The same thing can be said for O’Reilly’s BlackBerry Hacks; it came out about the same time and is primarily an end-user book. It has a couple of ‘hacks’ dedicated to BlackBerry development topics, but it can’t be called a developer book. BlackBerry Development Fundamentals is dedicated to BlackBerry development only and goes into much greater detail. It is interesting to note though that all of a sudden there are a lot of BlackBerry books coming out about the same time as mine; there are even a couple more development books on their way. Anthony Rizk’s book called Beginning BlackBerry Development is due out in November. It’s a book for Java developers, as it covers only Java development and is written as a series of tutorials. I’ve had some conversations with Anthony and you should see that the two books complement each other nicely. There’s Advanced BlackBerry Development, which is a sort of sequel to Anthony’s book, and a new Dummies book called BlackBerry All-in-One For Dummies coming that is going to be nine books in one and cover a wide range of topics, but probably will not be very developer focused. Both are expected out in early 2010. 
By the way, because of these books and more coming out soon I decided to build a site to act as a clearing house for information on all blackberry books, it’s at http://www.blackberrybooks.org and I’ll start populating it with information as soon as I really wrap this one up. I’ll post information about all available BlackBerry books, allow people to write reviews, comments and even rate books. I’ll also let authors write their own profiles about their books – let them talk about them more conversationally than they can on the back cover. NOTE: Anthony Rizk’s book Beginning BlackBerry Development has been released since this interview was conducted. Q: In your past life with RIM, you assisted many companies in training BlackBerry developers. What did you see or learn from those companies in regards to vision and preparedness? I saw a lot of different things in the companies I worked with. All of them saw that mobile applications were coming, but many of them really didn’t know what to do about it. The people I worked with knew that the organization would have a need for mobile applications, but in many cases didn’t yet see the executive sponsorship needed to move it along. In most cases, there wasn’t a dedicated mobile development tea, which really is a requirement, and companies were struggling with how to actually do the development. Companies had development teams, but few with the skills to cover the mobile part of it. This is actually the book’s sweet spot and probably the biggest reason I wrote the book, sorry for the shameless plug. Many companies expected that BlackBerry development was proprietary in some way, bought into the Microsoft Fear Uncertainty and Doubt (FUD) about the platform. They assumed that since the platform was proprietary and the BES was proprietary you somehow had to do ‘special’ development to work on the BlackBerry platform. That was true for MDS Runtime development, but it is definitely not true for browser and Java development as I highlight in the book. Experienced mobile developers were ready to jump in, but people just getting started just didn’t know where to get started. If I’d speak with a customer and explain to them all of the capabilities of the platform and the free tools they could use to build these apps, they would listen intently and take a bunch of notes but when I’d visit them months later, nothing would have happened. If I’d been able to hand them a book, rather than pointing them to a web site, I’m certain more would have actually happened. For many companies, building Java applications for BlackBerry wasn’t a problem for them especially since so many developers are leaving college knowing Java. Other companies though, because of their adoption of Microsoft technologies, had issues. They usually wanted to try to support both Windows Mobile and BlackBerry, but because of the technology chasm would have to either use the browser for these apps or build and maintain two versions of the application; something .Net such as Visual Basic or C# and another one in Java. This is probably the biggest problem Research In Motion needs to solve... and I’m not sure how hard they’re trying to solve it. Even though Java will run on both platforms, for some off reason people just don’t think to write a JME application and run it on both. They continue to ‘see’ BlackBerry and Windows Mobile being separate, incompatible systems when in reality they aren’t. Q: Do you feel companies are behind in BlackBerry Development? Yes, I definitely do. 
I have talked with a lot of customers about their future plans and surprisingly BlackBerry applications in most cases are still far off in the future. Even though these organizations are receiving real, measurable business value from these mobile devices, they’re not quite ready to start deploying mobile applications, which I believe will only increase the value of BlackBerry. In today’s economy, getting more value out of something you’re already paying for is very important. BES 5.0 is going to do a lot to help resolve this issue. With BES 4.x it’s just way too hard to manage deploying multiple applications to a large audience. Due to some limitations of the BES, the number of BlackBerry Software Configurations an administrator would need to create to manage deployment of several applications is mind boggling. With BES 5.0 those limitations all go away and it works like it should have worked all along. Q: How has BlackBerry Development changed over the past few years? Has it grown due to need (keep up with the Jones') or want (customer or employee offerings)? Research In Motion has been very smart in not trying to do too much at one time. They continue to add very powerful features to the device, both in hardware and API’s, so there’s always something new and sexy to do with the device, but nothing that dramatic. They’ve maintained their strong support of standards and let the market help drive what the BlackBerry should do from an application development standpoint. This isn’t a bad thing, but I’d expect RIM to just continue to follow standards as much as possible and let the market drive what a device can do. They’ll continue to innovate in hardware and technology, but I’m not expecting anything earth shattering from them on the development side. We’re finally going to be seeing Adobe Flash on a BlackBerry and It would be VERY interesting to see a mono (http://mono-project.com) implementation on BlackBerry, but I just don’t see that ever happening...at least in my lifetime. Q: Do you see any Development trends going forward? Everything is going to be done in the browser. The iPhone has done several things in/to the market: they’ve shown how cool apps can be on a mobile device (this has actually helped RIM) but they’ve ignored many standards,such as Java and mobile markup languages. In order for an organization to be able to support multiple platforms (which they are just going to HAVE to do, no choice) mobile applications will have to be created using the only technology that all mobile applications support and that’s plain, old HTML. Q: What do you see as an incentive for people learning BlackBerry or Mobile App development as a whole? Mobile is the future. While predicted a long time ago, I’m starting to regularly see that executives and others are no longer being equipped with laptops. They probably use a PC at the office, but when mobile they’re using or expected to use a smartphone. Nobody wants to carry two or more devices anymore, so the mobile device is going to win. Developers with mobile development skills are going to be highly sought after. Q: What is RIM doing to help Developers? They’re recognizing that some of their existing documentation just doesn’t address the needs of the beginning developer so they’re revamping the web site, adding labs, video demonstrations and updating their documentation to make it easier to understand. 
They’ve added the BlackBerry Developer’s Blog (http://supportforums.blackberry.com/.../bg-p/dev_blog) and have been posting some pretty interesting articles up there. They’re writing more white papers and knowledge base articles to help educate new developers. They’re supposed to be or have, but I haven’t really tested it yet, FINALLY updating their knowledge base so search actually returns the information you need. They’re not sitting still. I’ve had quite a few conversations with Mike Kirkup about the developer web site and associated documentation and they clearly want to make it better for everyone. Q: What is RIM not doing to help Developers? The only complaint I have -and it’s the distilling of a whole bunch of things- is that from what I can tell, RIM collectively has this view in its head that it knows what developers need and makes decisions based upon that view. Unfortunately that approach really only works for ISV partners and experienced mobile developers. In so much of the information available to developers, RIM has documented the ‘what’ of developer topics but completely misses the ‘why’ and sometimes the ‘how’ of the same topics. They need to provide more ‘why’ information – that would help developers who are just learning this stuff. Another thing they’re not doing, although they seem to be changing that, is using figures in their documentation. Too many times, all you get are a list of steps and nothing more. The documentation showing how to use the eJDE or MDS Studio would explain the steps: “Click the open button. Click Next. Click Next’ but anyone working through the document could easy get lost as they move from screen to screen. Adding figures to the walk-throughs allows the learner to follow the context of the demonstration – it’s too easy to not know which dialog you’re supposed to click ‘next’ in if you don’t have a screen shot to help you keep your place. Q: Final question and I’ll let you get out of here. Where can people get your book? You can get it online at your usual places: Amazon, Barnes & Noble, Borders and even Target. All of them have been taking pre-orders until the official release date. My publisher will also be onsite at the Blackberry Developers Conference next week selling copies and RIM also currently has me scheduled for a “Meet the Author” event. I think that’s on Thursday, but don’t hold me to that. I’m far from a developer, but I’ve been lucky enough to attend some of John’s BlackBerry Development sessions at past WES Conferences. Granted, I went for the free drinks and goodies at first, but I did stay to listen to the content, which I attribute to John’s dynamic speaking ability and his methods of teaching things for anyone to understand. I’m one of the many who have loathed learning Blackberry Development due to Java, but after talking with John, I’m excited to learn about all of the possible development methods that are not hardcore Java related If you’re lucky enough to attend the Developer’s Conference, and get a chance to say Hi to John while passing between sessions or at the “Meet the Author” event, please do. If you have never met John, he is super friendly and quite intelligent. Buy him a drink or two and he gets even friendlier. Total Comments 1 Posted 11-09-2009 at 09:20 PM by Otto
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00512-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
17,135
52
https://community.boomi.com/community/knowledge-center/blog/2017/01/12/january-2017-release-highlights
code
Process Route is now Generally Available Process Route is finally GA for all editions Professional and higher. As a refresher, the Process Route shape and component provide a way to dynamically choose which subprocess to execute based on a document value at runtime AND provide deployment independence between the main process, subprocess, and routing rules themselves. If you haven't explored this feature yet, definitely take a good look at it because it can dramatically change the way you think about and design processes. Read this overview then go through this tutorial. And for those of you anxiously awaiting this feature, here are a few of the incremental improvements since originally released as Tech Preview last June: - Return Path execution sequence control - Support for Test Mode - Enforce unique Keys and Return Paths - Improved runtime error messages if Process Route or subprocess are not deployed - Optional labels for route Keys - Ability to deploy Process Route component via the API This video showcases two areas of UI improvement: - Database import wizard - Updated Atom Chooser, multi-select tables and columns, retain context for tables, select condition fields for dynamic update profiles - Process Library - Filter by publisher, ability to create new destination folder upon install Deploy Component API In addition to processes you can now deploy other component types such as Process Routes, Certificates, and APIs programmatically via the AtomSphere platform API. This allows you to automate more deployments as part of your DevOps/development life cycle workflow. The Deployment API has been "genericized" and there are two new APIs, ComponentEnvironmentAttachment and ComponentAtomAttachment. Check out Automating Deployments with the AtomSphere API (updated) to see these APIs in action. Process Library Features For consumers/end users: - Filter by Publisher to find processes published by Dell Boomi vs. another parent account publisher (typically a partner) - Create new destination folder "on the fly" - Publisher tab - In Setup, there's a new "Publisher" tab to separate publisher information from the Connector SDK Developer tab. This allows you to configure your publisher info if you are only sharing processes or integration packs but not developing custom connectors. Note: Publisher information is required to be able to publish processes. You will be prompted to complete the form if you haven't already. Dell Boomi Process Library New processes published in the Process Library: - Recipe: Zip and Unzip Files (article) - AtomSphere Partner API: Provision Account and Integration Pack (article) - Amazon SQS (Customer idea!) - Easily integrate with Amazon's SQS cloud-based message queuing service. Read all about it in the Deep Dive: January 2017 Release Deep Dive: Amazon SQS Connector. - JMS Connection Pooling (Customer idea!) - Connection pooling has been added to improve performance by reusing connections. - Low Latency mode for Atom Queue listeners (Customer idea!) - Now Atom Queue listener processes can execute super-fast in Atom Cloud runtimes. - Disk Exact Match Type - It's not every day the trusty Disk connector gets an update! There's a new filter match type to read a file based on exact file name match instead of wildcard. If you know the exact file, this is much faster when looking for a specific file in a directory with many files. - Amazon S3 now supports multipart uploads. 
- Database Import wizard - Several UX improvements including table and column selection and Condition selection for Dynamic Update profiles. See UX video above. (Customer idea!) Connector SDK - Custom Operation Actions When developing connectors, you can now define a custom label to best describe the actions supported by your connector. No longer are you bound to GET, QUERY, CREATE, UPDATE, UPSERT, DELETE, and EXECUTE. Provide whatever label is most applicable or consistent with the given application or API: Attach, Read, Ship, Teleport...you name it, literally. Configure the @customTypeID and @customTypeLabel attributes in the connector descriptor file. Navigate to Setup > Developer to download the latest SDK version (v1.2.14). Master Data Management Source Ranking is now Generally Available Source Ranking is now GA for all MDM customers. This feature, originally introduced as Tech Preview last August provides MDM administrators the ability to govern which sources are/are not allowed to overwrite other sources, by ranking them by order of trust. Read more about Source attachment and configuration.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211935.42/warc/CC-MAIN-20180817084620-20180817104620-00075.warc.gz
CC-MAIN-2018-34
4,563
37
http://ukhas.org.uk/doku.php?id=projects:hab_modem
code
Android HAB Modem and Tracker

This page is a work in progress. Download the app, and for more information, see here. The app seems to no longer be on Google Play, but is available here. An updated version is available here.

To get audio data into the tablet or phone, the microphone input on the device is used. This is usually a 4-pole combined headphone and mic connector, so a special cable is needed, detailed below.

Audio In Cable

Parts needed (parts available from most places, not just Maplin):
- Some wire or shielded audio cable
- Some resistors (e.g. 1K & 22K - see below)

This page shows the pinout for the most common layout of 4-pole connector, along with a list of phones to which it applies, but it is not a complete list. If you can buy a headphone/mic headset accessory for your phone that has a 3.5mm connector, then the phone will support audio in via its input. (It is possible that some devices will switch the mic and gnd connections - if you connect them the wrong way round, the resistors will prevent damage to the radio or phone.)

The basic layout for the cable is shown below. The connector layouts shown are for the connectors linked to above. If you have different connectors, use the continuity setting on a multimeter to be sure of the layout of the connector. The colouring on the 4-pole connector shows what is connected to what.

The resistors form a potential divider: the radio output is typically designed to drive headphones, so the resistors reduce the amplitude to something expected by a mic input. Some devices also need a low (1K-2K) resistance between mic and gnd to detect the microphone. Typical resistor values are R1 = 22K and R2 = 1K, although some experimentation may be needed if these attenuate the signal too much, or if the audio input is not detected by the device. [Note, Kevin Walton, 04/08/13: a Samsung Galaxy S2 did not recognise the mic with R2 as 1K, but did recognise that a mic had been plugged in with R2 as 1.5K; it also required a ferrite filter to reduce the noise.]

Some phones appear to throw out a lot of RF noise on the audio cable, which can cause interference with the radio. Adding a ferrite ring on the cable at the phone side should solve the issue. (Thanks to Jon G8KNN for pointing this out.)

Source code is available here:
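As a quick sanity check on those component values, the short sketch below (plain Python, purely illustrative) computes the attenuation of the R1/R2 divider described above; with R1 = 22K and R2 = 1K the headphone-level signal is divided by roughly 23, i.e. reduced to about 4% of its original amplitude.

```python
def divider_ratio(r1_ohms, r2_ohms):
    """Output/input voltage ratio of a simple two-resistor potential divider."""
    return r2_ohms / (r1_ohms + r2_ohms)

for r2 in (1_000, 1_500):  # 1K as suggested above, 1.5K per the Galaxy S2 note
    ratio = divider_ratio(22_000, r2)
    print(f"R2 = {r2} ohms -> Vout/Vin = {ratio:.3f} ({ratio * 100:.1f}%)")
```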
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00422.warc.gz
CC-MAIN-2024-18
2,280
16
https://discuss.ipfs.tech/t/enabling-file-streaming-option-while-adding-large-file/10203
code
Is it possible to allow file streaming while adding a file to IPFS with go-ipfs, similar to what we have for js-ipfs [add.ALL], where it takes a fileStream as an input parameter? js-ipfs/FILES.md at master · ipfs/js-ipfs · GitHub

Here is my go-ipfs code that saves to IPFS from a stream: see the “writeFromStream” function. That’s Java code, but it’s using the HTTP API on a go-ipfs Docker instance. Hope it helps.

Thanks a lot wclayf. It is very helpful.
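For comparison, the same streaming upload can be done against go-ipfs's HTTP API from Python. The sketch below is illustrative rather than taken from the thread: it assumes a local daemon on the default port 5001 and uses requests-toolbelt's MultipartEncoder so the file is streamed rather than read fully into memory.

```python
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

def add_file_streaming(path, api="http://127.0.0.1:5001/api/v0/add"):
    """Stream a local file to the go-ipfs /api/v0/add endpoint and return its hash."""
    with open(path, "rb") as fh:
        body = MultipartEncoder(fields={"file": (path, fh, "application/octet-stream")})
        resp = requests.post(api, data=body,
                             headers={"Content-Type": body.content_type},
                             params={"pin": "true"})
    resp.raise_for_status()
    return resp.json()["Hash"]

print(add_file_streaming("large-file.bin"))  # hypothetical file name
```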
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00276.warc.gz
CC-MAIN-2022-40
454
6
http://sonkagames.com/
code
SONKA is a Polish game developer and publisher. Having long experience with Nintendo platforms, we are now focused on bringing high-quality ports to Nintendo Switch by, among other things, optimising the game.

Are you a developer? Looking to bring your game to Nintendo Switch? We can port and publish your game, no matter whether it's built in Unity, UE or a custom engine. Do you want to do the work yourself? We can also help you with the process by lending devkits, licensing our tools, QA and everything else Nintendo related.

We're hiring. If you're an experienced Unity programmer looking for work in Warsaw, contact us too.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00802.warc.gz
CC-MAIN-2023-06
616
7
https://cypressdiscgolf.com/products/amicoson-pickleball-paddleset
code
Amicoson Pickleball Paddle (x2) - 【EXCELLENT FOR BEGINNERS & FAMILIES】Reliable and stylish, this paddle is the perfect entry point for any up and coming pickleball player. Wooden paddles are commonly used in many schools and local clubs as excellent value options to bring new players into the game. Committed to bringing the game of Pickleball to a larger. - 【HIGH QUALITY PADDLES】The paddles feature durable 7-ply wood construction made from maple for plenty of power and durability. The handle with a comfortable cushion grip with a safety wrist strap, has superior grip and padding for competitive play while still allowing the player superior comfort. - 【Comfortable Grip】The pickleball paddle could be nicely handled. Each pickleball paddle features a specially designed grip that minimize slipping while maximizing the balance. Perforated, sweat absorbent cushioned grip of the paddle help you enjoy longtime play without fatigue. - 【PERFECT SIZE & HIGH PERFORMANCE】Dimensions: 15.35"X7.5", Grip Circumference: approx 4-1/4", Weight:269-290g (9.5-10oz). Lightweight compared to other clunky wood pickleball paddles, providing you with ball precise control and swing strength. Large Spot which means that our wooden pickleball set has more surface for catching the balls.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100499.43/warc/CC-MAIN-20231203094028-20231203124028-00205.warc.gz
CC-MAIN-2023-50
1,293
5
https://rebelgame.xyz/collecting-your-opinions-on-the-heroes-and-where-they-stand-cast-your-vote/
code
Here is the survey. Complete it if you want; I may or may not organize the data later on. Google does do an ok job of organizing it though! I will begin commenting results once I have reached 50, 100, 150, and 200 responses. Any more than 200 will be included in a separate thread, with organized data. This doesn't mean I won't organize it if it doesn't have over 200, though… EDIT: It would seem that this concern was invalid; 1650 responses and still going strong. This is some quality data; I'm excited to publish a thread on this tomorrow once I'm done sorting through it all. I'll leave it open overnight though. EDIT: Based on the comments section, I've drafted a second survey. https://docs.google.com/forms/d/e/1FAIpQLSetsSt9sx1eNEYh-NumX8efAcRq92KwLWbjpsx1ZvIvKk5b5g/viewform Feel free to respond to it as well!
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828501.85/warc/CC-MAIN-20181217091227-20181217113227-00607.warc.gz
CC-MAIN-2018-51
823
5
http://24ways.org/vote/down/c216/
code
Jump to content Veracon: Unnecessary in what sense? Compared to the old table-based way of doing things, I think an extra span is a small price to pay. Of course, your semantic mileage may vary. Patrick: thanks for the Firefox 1.0.7 note. I’ll take a look at that, but I’m guessing there’s some weird pixel rounding error at play there. Mark: Fair enough, and you’re obviously more than welcome to use that approach. I normally wouldn’t put any browser-specific hacks in my core stylesheet, but I thought it the best approach for the purposes of a quick write-up. Thanks for the feedback, all!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00009-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
603
5
https://www.experts-exchange.com/questions/21193837/Why-does-my-USB-keyboard-disappear-after-loading-my-USB-CD-ROM-driver.html
code
I'm upgrading Windows-based instruments from 98 to XP. We don't have an internal CD-ROM drive, so I'm using a USB DVD drive (Sony DRX-710UL). We need to repartition to include a restore image, and I'd like to just boot from a diskette, copy over the restore image, and then use a restore diskette to re-establish the C: partition. I can do all of this with NET USE instead of the CD-ROM in the lab with no problem. I've created a boot diskette to load the USB CD-ROM device driver using: MSCDEX.EXE /D:USBCD001 /L:Q. When I boot, the driver is loaded, the drive is found and assigned Q:, and it then returns to the command prompt, but my USB keyboard (which is enabled in the BIOS) is now dead. How can I get around this?
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589752.56/warc/CC-MAIN-20180717144908-20180717164908-00586.warc.gz
CC-MAIN-2018-30
694
3
http://thefashionmenue.blogspot.com/2013/08/new-header.html
code
I don't know if some of you have already noticed, but I have a new header. I'm still not satisfied with it, but for now I think it's the right one^^ There's something wrong with the color, though: as you can see in the picture in this post, the background is supposed to be white, but it appears as some kind of grey in the header. Does anybody have an idea what went wrong? I'm thankful for any hint or advice!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00202-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
399
2
https://meta.stackexchange.com/questions/69669/why-doesnt-stackexchange-com-have-information-about-stack-exchange
code
The first few times you go to stackexchange.com, you see the following: ##What is this?## Stack Exchange is a network of free, community-driven Q&A sites. We highlight and aggregate the best recent content from our entire network here. Log in to create tag sets to view questions on subjects that interest you. Wish we had Q&A on a different topic? Help us create new Q&A sites through the open, democratic process at Area 51. For more information, check out the blog or read more about But right now we don't really see stackexchange.com as the entrance point to the network. Most people will find out about Stack Exchange by visiting one of the actual Q&A sites, and only later realize that it's part of a larger network. Putting it another way, if I were telling someone about Stack Exchange, I wouldn't send them to stackexchange.com, I'd send them to Stack Overflow, or Gaming, or Cooking, etc. StackExchange.com is more for power users of the network, to give them tools to manage all of the sites they're interested in with the Hot Questions page and Tag Sets, or track their placement in the Reputation Leagues. Joel also wrote a very nice about page a few weeks ago.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.10/warc/CC-MAIN-20210923044248-20210923074248-00396.warc.gz
CC-MAIN-2021-39
1,175
17
https://deepai.org/publication/serial-speakers-a-dataset-of-tv-series
code
For over a decade now, tv series have been drawing increasing attention. In 2019, the final season of Game of Thrones, one of the most popular tv shows these past few years, has averaged 44.2 million viewers per episode; many tv series have huge communities of fans, resulting in numerous online crowdsourced resources, such as wikis111gameofthrones.wikia.com/wiki/Game_of_Thrones_Wiki, dedicated forums222asoiaf.westeros.org/index.php?/forum, and YouTube channels. Long dismissed as a minor genre by the critics, some recent tv series also received critical acclaim as a unique space of creativity, able to attract even renowned full-length movie directors, such as Jane Campion, David Fincher or Martin Scorsese. Nowadays, tv series have their own festivals333In France, Series Mania.. For more than half of the people444194 individuals, mostly students from our university, aged 23.12 5.73. we polled in the survey reproduced in [Bost2016], watching tv series is a daily occupation, as can be seen on Fig. (a)a. Such a success is probably related to the cultural changes caused by modern media: high-speed internet connections led to unprecedented viewing opportunities. As shown on Fig.(b)b, television is no longer the main channel used to watch “tv” series: most of the time, streaming and downloading services are preferred to television. Unlike television, streaming and downloading platforms give control to the user, not only over the contents he may want to watch, but also over the viewing frequency. As a consequence, the typical dozen of episodes a tv series season contains is often watched over a much shorter period of time than the usual two months it is being broadcast on television. As can be seen on Fig. (a)a, for almost 80% of the people we polled, watching a tv series season (about 10 hours in average) never takes more than a few weeks. As a major consequence, tv series seasons, usually released once a year, are not watched in a continuous way. For some types of tv series, discontinuous viewing is generally not a major issue. Classical tv series consist of self-contained episodes, only related with one another by a few recurring protagonists. Similarly, anthologies contain standalone units, either episodes (e.g. The Twilight Zone) or seasons (e.g. True detective), but without recurring characters. However, for tv serials, discontinuous viewing is likely to be an issue: tv serials (e.g. Game of Thrones) are based on highly continuous plots, each episode and season being narratively related to the previous ones. Yet, as reported on Fig. (b)b, tv serials turn out to be much more popular than classical tv series: nearly 2/3 of the people we polled prefer tv serials to the other types, and 1/4 are more inclined to a mix between the classical and serial genres, each episode developing its own plot but also contributing to a secondary, continuous story. As a consequence, viewers are likely to have forgotten to some extent the plot of tv serials when they are, at last, about to know what comes next: nearly 60% of the people we polled feel the need to remember the main events of the plot before viewing the new season of a tv serial. Such a situation, quite common, provides multimedia retrieval with remarkably realistic use cases. A few works have been starting to explore multimedia retrieval for tv series. Tapaswi2014a investigate ways of automatically building xkcd-style555xkcd.com/657 visualizations of the plot of tv series episodes based on the interactions between onscreen characters. 
Ercolessi2012a explore plot de-interlacing in tv series based on scene similarities. Bost2019 made use of automatic extractive summaries for re-engaging viewers with Game of Thrones’ plot, a few weeks before the sixth season was released. Roy2014 and Tapaswi2014b make use of crowdsourced plot synopses which, once aligned with video shots and/or transcripts, can support high-level, event-oriented search queries on tv series content. Nonetheless, most of these works focus either on classical tv series, or on standalone episodes of tv serials. Due to the lack of annotated data, very few of them address the challenges related to the narrative continuity of tv serials. We aim at filling this gap by providing the multimedia/speech processing research communities with Serial Speakers, an annotated dataset focusing on three American tv serials: Breaking Bad (seasons 1–5 / 5), Game of Thrones (seasons 1–8 / 8), House of Cards (seasons 1–2 / 6). Besides multimedia retrieval, the annotations we provide make our dataset suitable for lower level tasks in challenging conditions (Subsection 3.1). In this paper, we first describe the few existing related datasets, before detailing the main features of our own Serial Speakers dataset; we finally describe the tools we make available to the users for reproducing the copyrighted material of the dataset. 2 Related Works These past ten years, a few commercial tv series have been annotated for various research purposes, and some of these annotations have been publicly released. We review here most of the tv shows that were annotated, along with the corresponding types of annotations, whenever publicly available. Seinfeld (1989–1998) is an American tv situational comedy (sitcom). Friedland2009 rely on acoustic events to design a navigation tool for browsing episodes publicly released during the acm Multimedia 2009 Grand Challenge. Buffy the Vampire Slayer (1997–2001) is an American supernatural drama tv series. This show was mostly used for character naming [Everingham et al.2006], face tracking and identification [Bäuml et al.2013], person identification [Bäuml et al.2014], [Tapaswi et al.2015b], story visualization [Tapaswi et al.2014b], and plot synopses alignment [Tapaswi et al.2014a]666Visual (face tracks and identities) and linguistic (video alignment with plot synopses) annotations of the fifth season can be found at cvhci.anthropomatik.kit.edu/mtapaswi/projects-mma.html. Ally McBeal (1997–2002) is an American legal comedy-drama tv series. The show was annotated for performing scene segmentation based on speaker diarization [Ercolessi et al.2011] and speech recognition [Bredin2012], plot de-interlacing [Ercolessi et al.2012b], and story visualization [Ercolessi et al.2012a]777Annotations (scene/shot boundaries, speaker identity) of the first four episodes are available at herve.niderb.fr/data/ally_mcbeal. The Big Bang Theory (2007–2019) is also an American tv sitcom. Six episodes were annotated for the same visual tasks as those performed on Buffy the Vampire Slayer: face tracking and identification [Bäuml et al.2013], person identification [Bäuml et al.2014], [Tapaswi et al.2015b], and story visualization [Tapaswi et al.2014b]. Tapaswi2012 also focus on speaker identification and provide audiovisual annotations for these six episodes888cvhci.anthropomatik.kit.edu/mtapaswi/projectspersonid.html. 
In addition to these audiovisual annotations, Roy2014 publish in the tvd dataset other crowdsourced, linguistically oriented resources, such as manual transcripts, subtitles, episode outlines and textual summaries (tvd.niderb.fr).

Table 1: Speech duration (with its ratio of the video duration, in %), number of speech turns, and number of speakers, for every season of bb, got and hoc.

| Season | Speech duration bb | Speech duration got | Speech duration hoc | # speech turns bb | # speech turns got | # speech turns hoc | # speakers bb | # speakers got | # speakers hoc |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 02:01:19 (36) | 03:32:55 (40) | 04:50:12 (45) | 4523 | 6973 | 11182 | 59 | 115 | 126 |
| 2 | 03:42:15 (38) | 03:33:53 (41) | 05:07:16 (48) | 8853 | 7259 | 11633 | 86 | 127 | 167 |
| 3 | 03:42:04 (38) | 03:30:01 (39) | - | 7610 | 7117 | - | 85 | 115 | - |
| 4 | 03:38:08 (37) | 03:11:28 (37) | - | 7583 | 6694 | - | 70 | 119 | - |
| 5 | 04:40:03 (38) | 02:55:32 (33) | - | 10372 | 6226 | - | 92 | 121 | - |
| 6 | - | 02:48:48 (32) | - | - | 5674 | - | - | 149 | - |
| 7 | - | 02:13:55 (32) | - | - | 4526 | - | - | 66 | - |
| 8 | - | 01:27:17 (21) | - | - | 3141 | - | - | 50 | - |
| Total | 17:43:53 (38) | 23:13:52 (35) | 09:57:29 (46) | 38941 | 47610 | 22815 | 288 | 468 | 264 |

Game of Thrones (2011–2019) is an American fantasy drama. Tapaswi2014a make use of annotated face tracks and face identities in the first season (10 episodes). In addition, Tapaswi2015b provide the ground truth alignment between the first season of the tv series and the books it is based on (cvhci.anthropomatik.kit.edu/mtapaswi/projectsbook_align.html). For a subset of episodes, the tvd dataset provides crowdsourced manual transcripts, subtitles, episode outlines and textual summaries. As can be seen, many of these annotations target vision-related tasks. Furthermore, little attention has been paid to tv serials and their continuous plots, usually spanning several seasons. Instead, standalone episodes of sitcoms are overrepresented. And finally, even when annotators focus on tv serials (Game of Thrones), the annotations are never provided for more than a single season. Similar to the computer vision accio dataset for the series of Harry Potter movies [Ghaleb et al.2015], our Serial Speakers dataset aims in contrast at providing annotations of several seasons of tv serials, in order to address both the realistic multimedia retrieval use cases we detailed in Section 1, and lower level speech processing tasks in unusual, challenging conditions.

Table 2: Duration of the video recordings, expressed in HH:MM:SS, and number of episodes (in parentheses), for every season of bb, got and hoc.

| Season | bb | got | hoc |
|---|---|---|---|
| 1 | 05:32:44 (7) | 08:58:28 (10) | 10:48:15 (13) |
| 2 | 09:51:08 (13) | 08:41:56 (10) | 10:37:10 (13) |
| 3 | 09:49:40 (13) | 08:52:04 (10) | - |
| 4 | 09:46:16 (13) | 08:41:05 (10) | - |
| 5 | 12:15:36 (16) | 08:56:50 (10) | - |
| 6 | - | 08:55:43 (10) | - |
| 7 | - | 06:58:54 (7) | - |
| 8 | - | 06:48:31 (6) | - |
| Total | 47:15:26 (62) | 66:53:34 (73) | 21:25:26 (26) |

3 Description of the Dataset

Our Serial Speakers dataset consists of 161 episodes from three popular tv serials. Breaking Bad (denoted hereafter bb), released between 2008 and 2013, is categorized on Wikipedia as a crime drama, contemporary western and black comedy. We annotated 62 episodes (seasons 1–5) out of 62. Game of Thrones (got) has been introduced above in Section 2. We annotated 73 episodes (seasons 1–8) out of 73. House of Cards (hoc) is a political drama, released between 2013 and 2018. We annotated 26 episodes (seasons 1–2) out of 73. Overall, the total duration of the video recordings amounts to 135 hours (135:34:27). Table 2 details, for every season of each of the three tv serials, the duration of the video recordings, expressed in "HH:MM:SS", along with the corresponding number of episodes (in parentheses).

3.1 Speech Turns

As in any full-length movie, speech is ubiquitous in tv serials.
As reported in Table 1, speech coverage in our dataset ranges from 35% to 46% of the video duration, depending on the tv series, for a total amount of about 51 hours. As can be seen, speech coverage is much higher (46%) in hoc than in bb and got (respectively 38% and 35%). As a political drama, hoc is definitely speech oriented, while the other two series also contain action scenes. Interestingly, speech coverage in got tends to decrease over the 8 seasons, especially from the fifth one. The first seasons turn out to be relatively faithful to the book series they are based on, while the last ones tend to depart from the original novels. Moreover, with increasing financial means, got progressively moved to a pure fantasy drama, with more action scenes. The basic speech units we consider in our dataset are speech turns, graphically signaled as sentences by ending punctuation signs. Unlike speaker turns, two consecutive speech turns may originate in the same speaker. The boundaries (starting and ending points) of every speech turn are annotated. During the annotation process, speech turns were first based on raw subtitles, as retrieved by applying a standard ocr tool to the commercial dvds. Nonetheless, subtitles do not always correspond to speech turns in a one-to-one way: long speech turns usually span several consecutive subtitles; conversely, a single subtitle may contain several speech turns, especially in case of fast speaker change. We then applied simple merging/splitting rules to recover the full speech turns from the subtitles, before refining their boundaries by using the forced alignment tool described in [McAuliffe et al.2017]. The resulting boundaries were systematically inspected and manually adjusted whenever necessary. Such annotations make our dataset suitable for the speech/voice activity detection task. Overall, as reported in Table 1, the dataset contains 109,366 speech turns. Speech turns are relatively short: the median speech turn duration amounts to 1.3 seconds for got, 1.2 for hoc, and only 1.1 for bb. As can be seen on Fig. (a), the statistical distribution of the speech turn durations, here plotted on a log-log scale as a complementary cumulative distribution function, seems to exhibit a heavy tail in all three cases. This is confirmed more objectively by applying the statistical testing procedure proposed by Clauset2009, which shows that these distributions follow power laws. This indicates that the distribution is dominated by very short segments, but that there is a non-negligible proportion of very long segments, too. It also reveals that the mean is not an appropriate statistic to describe this distribution. By definition, every speech turn is uttered by a single speaker. We manually annotated every speech turn with the name of the corresponding speaking character, as credited in the cast list of each tv series episode. A small fraction of the speech segments (bb: 1.6%, got: 3%, hoc: 2.2%) were left as unidentified ("unknown" speaker). In the rare cases of two partially overlapping speech turns, we decided to cut off the first one at the exact starting point of the second one, to preserve its purity as much as possible. Overall, as can be seen in Table 1, 288 speakers were identified in bb, 468 in got and 264 in hoc. With an average speaking time of 132 seconds per speaker, hoc contains relatively more speakers than got (175 seconds/speaker), which in turn contains relatively more speakers than bb (218 seconds/speaker).
Fig. (b) shows the distribution of the speaking time (expressed as a percentage of the total speech time) over all speakers, again plotted on a log-log scale as a complementary cumulative distribution function. Once again, the speaking time of each speaker seems to follow a heavy-tailed distribution, with a few ubiquitous speakers and lots of barely speaking characters. This is confirmed through the same procedure as before, which identifies three power laws. If we consider that speaking time captures the strength of social interactions (soliloquies aside), this is consistent with results previously published for other types of weighted social networks [Li and Chen2003, Barthélemy et al.2005]. Nonetheless, as can be seen on the figure, the main speakers of got are not as ubiquitous as the major ones in the other two series: while the five main protagonists of bb and hoc respectively accumulate 64.3 and 48.6% of the total speech time, the five main characters of got "only" accumulate 25.6%. Indeed, got's plot, based on a choral novel, is split into multiple storylines, each centered on one major protagonist. Moreover, even major, recurring characters of tv serials are not always uniformly represented over time. Fig. 4 depicts the lower part of the correlation matrices computed between the speakers involved in every season of bb (Fig. 4(a)) and got (Fig. 4(b)): the distribution of the relative speaking time of every speaker in each season is first computed, before the Pearson correlation coefficient is calculated between every pair of season distributions.
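This season-to-season comparison is straightforward to reproduce from the speech turn annotations. The snippet below is only a sketch, not the authors' code; it assumes the annotations have been loaded into a table with one row per speech turn and columns named speaker, season and duration.

```python
# Minimal sketch (not the authors' code): season-by-season Pearson correlation
# of per-season speaking-time distributions, computed from speech turn records.
# Assumed layout: one row per speech turn, with columns 'speaker', 'season', 'duration'.
import pandas as pd

def season_correlation_matrix(speech_turns: pd.DataFrame) -> pd.DataFrame:
    # Total speaking time of every speaker in every season (0 when the speaker is absent).
    totals = speech_turns.pivot_table(index="speaker", columns="season",
                                      values="duration", aggfunc="sum", fill_value=0.0)
    # Relative speaking time within each season.
    relative = totals / totals.sum(axis=0)
    # Pearson correlation between every pair of season distributions.
    return relative.corr(method="pearson")
```

Coefficients close to 1 then mean that essentially the same characters dominate the dialogue in both seasons, which is the pattern summarized by the two panels of Fig. 4.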
As can be seen, the situation differs markedly from one tv serial to the other. Whereas the major speakers of bb remain quite the same over all five seasons (correlation coefficients close to 1, except for the very last, fifth one, with a few new entering characters), got exhibits much lower correlation coefficients. For instance, the main speakers involved in the first season turn out to be quite different from the speakers involved in the other ones (the average correlation coefficient with the other seasons only amounts to 0.56 ± 0.05). Indeed, got is known for its numerous, shocking deaths of major characters (see got.show for an attempt to automatically predict the characters who are the most likely to die next). Moreover, got's narrative usually focuses alternately on each of its multiple storylines, but may postpone some of them for an unpredictable time, resulting in uneven speaker involvement over the seasons. Fig. 6 depicts the relative speaking time in every season of the 12 most active speakers of got. As can be seen, some characters are barely present in some seasons, for instance Jon (rank #4) in Season 2, or even absent, like Tywin (rank #12) in Seasons 5–8. Furthermore, as can be noticed on Fig. 6, the relative involvement of most of these 12 protagonists in Seasons 7–8 is much larger than in the other seasons: indeed, Seasons 7–8 are centered on fewer speakers (respectively 66 and 50, vs. 124.3 ± 11.8 on average in the first six ones). Speaker annotations make our dataset suitable for the speaker diarization/recognition tasks, but in especially challenging conditions: first, as stated in [Bredin and Gelly2016], the usual 2-second assumption made for the speech turns by most state-of-the-art speaker diarization systems no longer holds. Second, the high number of speakers involved in tv serials, along with the way their utterances are distributed over time, makes one-step approaches particularly difficult. In such conditions, multi-stage approaches should be more effective [Tran et al.2011]. Besides, as noted in [Bredin and Gelly2016], the spontaneous nature of the interactions, as well as the usual background music and sound effects, heavily hurt the performance of standard speaker diarization/recognition systems [Clément et al.2011]. Though not provided with the annotated dataset for obvious copyright reasons (instead, we provide the users with online tools for recovering the textual content of the dataset from external subtitle files; see Section 4 for a description), the textual content of every speech turn has been revised, based on the output of the ocr tool we used to retrieve the subtitles. In particular, we restored a few missing words, mostly for bb, as the subtitles sometimes contain some deletions. bb contains 229,004 tokens (word occurrences) and 10,152 types (unique words); got 317,840 tokens and 9,275 types; and hoc 153,846 tokens and 8,508 types. As the number of tokens varies dramatically from one tv serial to the other, we used the length-independent mtld measure [McCarthy and Jarvis2010] to assess the lexical diversity of the three tv serials. With a value of 88.2 (threshold set to 0.72), the vocabulary of hoc turns out to be richer than that of got (69.6) and bb (64.5). More speech oriented, hoc thus also exhibits more lexical diversity than the other two series.

3.2 Interacting Speakers

In a subset of episodes, the addressees of every speech turn have been annotated. Trivial within two-speaker sequences, such a task, even for annotators, turns out to be especially challenging in more complex conditions: most of the time, the addressees have to be inferred both from visual clues and from the semantic content of the interaction. In soliloquies (not rare in hoc), the addressee field was left empty. Though not frequently addressed for itself, the task of determining the interacting speakers is nonetheless a prerequisite for social network-based approaches to the analysis of works of fiction, which generally lack annotated data to intrinsically assess the interactions they assume [Labatut and Bost2019]. Moreover, speaker diarization/recognition on the one hand, and detection of interaction patterns on the other hand, could probably benefit from one another and be performed jointly. As an example, Fig. 5 shows the conversational networks based on the annotated episodes of each serial. The vertex sizes match their degree, while their color corresponds to their betweenness centrality. This clearly highlights the obvious main characters, such as Walter White (bb) or Francis Underwood (hoc), but also more secondary characters that have very specific roles narrative-wise, e.g. Jaime Lannister, who acts as a bridge between two groups of characters corresponding to two distinct narrative arcs. This illustrates the interest of leveraging the social network of characters when dealing with narrative-related tasks.
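Such conversational networks can be assembled directly from the addressee annotations. The following is only a minimal illustration (not the actual pipeline used for Fig. 5), assuming the interactions are available as (speaker, addressee) pairs:

```python
# Illustrative sketch: conversational network from (speaker, addressee) annotations.
# Soliloquies have an empty addressee field and are skipped.
import networkx as nx

def conversational_network(interactions):
    g = nx.Graph()
    for speaker, addressee in interactions:
        if not addressee:          # soliloquy: addressee left empty
            continue
        if g.has_edge(speaker, addressee):
            g[speaker][addressee]["weight"] += 1
        else:
            g.add_edge(speaker, addressee, weight=1)
    sizes = dict(g.degree())                  # vertex size: degree
    colors = nx.betweenness_centrality(g)     # vertex color: betweenness centrality
    return g, sizes, colors
```

Main characters then stand out through their degree, and bridge characters such as Jaime Lannister through their betweenness centrality.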
3.3 Shot Boundaries

Besides speech oriented annotations, the Serial Speakers dataset contains a few visual annotations. For the first season of each of the three tv series, we manually annotated shot boundaries. A video shot, as stated in [Koprinska and Carrato2001], is defined as an "unbroken sequence of frames taken from one camera". Transitions between video shots can be gradual (fade-in/fade-out), or abrupt ones (cuts). Most of the shot transitions in our dataset are simple cuts. The first seasons of bb, got, and hoc respectively contain 4,416, 9,375 and 8,783 shots, with an average duration of 4.5, 3.4 and 4.4 seconds. Action scenes in got are likely to be responsible for the shorter shots on average. Shot boundary detection is nowadays performed reliably, especially when consecutive shots transition abruptly from one another. As a consequence, it is rarely addressed for itself, but as a preliminary task for more complex ones.

3.4 Recurring Shots

Shots rarely occur only once in edited video streams: on average, a shot occurs 10 times in bb, 15.2 in got and 17.7 in hoc. Most of the time, dialogue scenes are responsible for such shot recurrence. As can be seen on Fig. 8, within dialogue scenes, the camera typically alternates between the interacting characters, resulting in recurring, possibly alternating, shots. We manually annotated such recurring shots, based on similar framing, in the first season of the three tv series. As stated in [Yeung et al.1998], recurring shots usually capture interactions between characters. Relatively easy to cluster automatically, recurring shots are especially useful to multimodal approaches of speaker diarization [Bost et al.2015]. Besides, recurring shots often result in complex interaction patterns, denoted logical story units in [Hanjalic et al.1999]. Such patterns are suitable for supporting local speaker diarization approaches [Bost and Linares2014], or for providing extractive summaries with consistent subsequences [Bost et al.2019].

3.5 Scene Boundaries

Scenes are the longest units we annotated in our dataset. As required by the rule of the three unities classically prescribed for dramas, a scene in a movie is defined as a homogeneous sequence of actions occurring at the same place, within a continuous period of time. Though we provided annotators with general guidelines, such a definition leaves room for interpretation, and some subjective choices still have to be made to annotate scene boundaries. First, temporal discontinuity is not always obvious to address: temporal ellipses often correspond to new scenes, but sometimes, especially when short, they hardly break the narrative continuity of the scene.

[Displaced fragment of Table 3 (season 1 row): annotation check marks for each series, with the episode lists 4, 6; 3, 7, 8; and 1, 7, 11.]

Second, as shown on Fig. 9, scenes often open with long shots that show the place of the upcoming scene. Though there is, strictly speaking, no spatial continuity between the first shot and the following ones, they obviously belong to the same scene, and should be annotated as such. Finally, action homogeneity may also be tricky to assess. For instance, a phone call within a scene may interrupt an ongoing dialogue, resulting in a new phone conversation with another character, and possibly in a new action unit. In such cases, we generally inserted a new scene to capture the interrupting event, but other conventions could have been followed. Indeed, the choice of scene granularity remains highly dependent on the use case the annotators have in mind when annotating such data: special attention to speaker interactions would for instance call for more frequent scene boundaries. Overall, bb contains 1,337 scenes, with an average duration of 127.1 seconds; got 1,813 scenes (avg. duration of 132.6 seconds); hoc 1,048 scenes (avg. duration of 73.2 seconds). Once again, hoc contrasts with the two other series, with many short scenes. A further figure shows the joint distribution of the number of speakers by scene and the duration of the scene.
For visualization purposes, the joint distribution is plotted as a continuous bivariate function, as fitted by kernel density estimation. As can be seen from the marginal distribution represented horizontally above each plot, the number of speakers in each scene remains quite low: 2 on average in bb, 2.1 in hoc, and a bit more (2.4) in got. Besides, the number of characters in each scene, except maybe in got, is not clearly correlated with its duration. Moreover, some short scenes surprisingly do not contain any speaking character: most of them correspond to the opening and closing sequences of each episode. Finally, the short scenes of hoc generally contain two speakers. Table 3 provides an overview of the annotated parts of the Serial Speakers dataset, along with the corresponding types of annotations. In the table, "Speech turns" stands for the annotation of the speech turns (boundaries, speaker, text); "Scenes" for the annotation of the scene boundaries; "Shots" for the annotation of the recurring shots and shot boundaries; and "Interlocutors" for the annotation of the interacting speakers (the annotation files are available online at doi.org/10.6084/m9.figshare.3471839).

4 Text Recovering Procedure

Due to copyright restrictions, the published annotation files do not reproduce the textual content of the speech turns. Instead, the textual content is encrypted in the public version of the Serial Speakers dataset, and we provide the users with a simple toolkit to recover the original text from their own subtitle files (the toolkit is available online at github.com/bostxavier/Serial-Speakers). Indeed, the overlap between the textual content of our dataset and the subtitle files is likely to be large: compared to the annotated text, subtitles may contain either insertions (formatting tags, sound effect captions, mentions of speaking characters when not present onscreen), or some deletions (sentence compression), but very few substitutions. Every word in the transcript, if not deleted, generally has the exact same form in the subtitles. As a consequence, the original word sequence can be recovered from the subtitles. Our text recovering algorithm first encrypts the tokens found in the subtitle files provided by the user, before matching the resulting sequence with the original encrypted token sequence. The general procedure we detail below is likely to be of some help to annotators of other movie datasets with similar copyrighted material.

4.1 Text Encryption

For the encryption step, we used truncated hash functions because of the following desirable properties: being deterministic, hash functions ensure that identical words are encrypted in the same way in the original text and in the subtitles; they do not reveal information about the original content, allowing the public version of our dataset to comply with the copyright restrictions; they are efficient enough to quickly process the thousands of word types contained in the subtitles; moreover, once truncated, hash functions result in collisions, which help prevent simple dictionary attacks. Indeed, the main requirement in our case is only to prevent collisions from occurring too close to each other: even if two different words were encrypted in the same way, they would be unlikely to be close enough to result in ambiguous subsequences.
In the public version of our dataset, we compute the first three digits of the SHA-256 hash function of all of the tokens (including punctuation signs), and the exact same encryption scheme is applied to the subtitle files provided by the users, resulting in two encrypted token sequences for every episode of the three tv series.

4.2 Subtitle Alignment

We then apply to the two encrypted token sequences the Python difflib sequence matching algorithm (docs.python.org/3/library/difflib.html), built upon the approach detailed in [Ratcliff and Metzener1988]. Once aligned with the encrypted subtitle sequence, the tokens of the dataset are decrypted by retrieving the original words from the subtitles. The whole text recovering procedure is summarized on Fig. 10. The annotated dataset with clear text, materialized by the gray box (Box 1) on the figure, is not publicly available. Instead, in the public annotations, the text is encrypted (Box 2). In order to recover the text, the user has to provide his/her own subtitle files (Box 3), which are encrypted by our tool in the same way as the original dataset text (Box 4); the resulting encrypted token sequence is matched with the corresponding token sequence of speech turns (red frame on the figure), before the text of the speech turns is recovered from the subtitle words (Box 5).
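A compressed sketch of this two-step procedure (truncated SHA-256 encryption of tokens, then difflib alignment) is given below. It is an approximation of the idea rather than the published toolkit, and it assumes the tokenization has already been done.

```python
# Approximation of the text recovering idea (not the actual toolkit):
# 1) encrypt tokens with a truncated SHA-256 hash, 2) align the two encrypted
# sequences with difflib, 3) copy clear subtitle words onto matched dataset tokens.
import hashlib
from difflib import SequenceMatcher

def encrypt(tokens, n_digits=3):
    return [hashlib.sha256(t.encode("utf-8")).hexdigest()[:n_digits] for t in tokens]

def recover_text(encrypted_dataset_tokens, subtitle_tokens, n_digits=3):
    encrypted_subs = encrypt(subtitle_tokens, n_digits)
    recovered = ["<>"] * len(encrypted_dataset_tokens)   # empty tag marks unmatched tokens
    matcher = SequenceMatcher(None, encrypted_dataset_tokens, encrypted_subs, autojunk=False)
    for block in matcher.get_matching_blocks():
        for offset in range(block.size):
            recovered[block.a + offset] = subtitle_tokens[block.b + offset]
    return recovered
```

As argued above, truncating the hash to three hexadecimal digits is enough here, because accidental collisions are very unlikely to occur close to each other in the two sequences.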
4.3 Experiments and Results

In order to assess the text recovering procedure, we automatically recovered the textual content from external, publicly available subtitle files, and compared it to the annotated text. Table 4 reports, in percentage and for each of the three series, the average error rates by episode, both computed at the word level (word error rate, denoted wer in the table) and at the sentence level (sentence error rate, denoted ser). In addition, we reported for every episode the average number of reference tokens (denoted # tokens), and the average number of insertions, deletions, and substitutions in the reference word sequence (respectively denoted Ins, Del, Sub). Because of possibly inconsistent punctuation conventions between the annotated and subtitle text, we systematically removed the punctuation signs from both sequences before computing the error rates. As can be seen, the average error rates remain remarkably low: the word error rate amounts to less than 1% on average. The sentence error rate also remains quite low: about 1% for got and hoc, and a bit higher (4.6%) for bb. As can be seen in the right part of the table, deletions are responsible for most of the errors, especially in bb: as noted in Subsection 3.1, we restored the words missing in the subtitles when annotating the textual content of the speech turns. Such missing words turn out to be relatively frequent in bb, which can in part explain the higher number of deletions (about 53 deleted words on average out of 3,700). Moreover, truncating the hash function to the first three digits does not hurt the performance of the text recovering procedure, while preventing simple dictionary attacks: the exact same error rates (not reported in the table) are obtained when keeping the full hash (64 hexadecimal digits). In order to allow the user to quickly inspect and edit the differences between the annotated text and the subtitles, our tool inserts in the recovered dataset an empty tag <> at the location of deleted reference tokens. Similarly, we signal every substituted token with an enclosing tag (e.g. <Why>). As will be seen when using the toolkit, most of the differences come from different punctuation/quotation conventions between the annotation and subtitle files, and rarely impact the vocabulary or the semantics. The whole recovering process turns out to be fast: 8.3 seconds for got (73 episodes) on a personal laptop (Intel Xeon-E3-v5 cpu); 6.73 for bb (62 episodes); 4.41 for hoc (26 episodes). We tried to keep the toolkit as simple as possible, with a single text recovering Python script with few dependencies.

5 Conclusion and Perspectives

In this work, we described Serial Speakers, a dataset of 161 annotated episodes from three popular tv serials, Breaking Bad (62 annotated episodes), Game of Thrones (73), and House of Cards (26). Serial Speakers is suitable for addressing both high level multimedia retrieval tasks in real world scenarios, and lower level speech processing tasks in challenging conditions. The boundaries, speaker and textual content of every speech turn, along with all scene boundaries, have been manually annotated for the whole set of episodes; the shot boundaries and recurring shots for the first season of each of the three series; and the interacting speakers for a subset of 10 episodes. We also detailed the simple text recovering tool we made available to the users, potentially helpful to annotators of other datasets facing similar copyright issues. As future work, we will first consider including the face tracks/identities provided for the first season of got in [Tapaswi et al.2015a], but these face tracks, automatically generated, would need manual checking before publication. Furthermore, we plan to investigate more flexible text encryption schemes: due to the uniqueness property, hash functions, even truncated, are not tolerant to spelling/ocr errors in the subtitles. Though the correct word is generally recovered from the surrounding tokens, it would be worth investigating encryption functions that would preserve the similarity between simple variations of the same token.

6 Acknowledgements

This work was partially supported by the Research Federation Agorantic FR 3621, Avignon University.

7 Bibliographical References

- [Barthélemy et al.2005] Barthélemy, M., Barrat, A., Pastor-Satorras, R., and Vespignani, A. (2005). Characterization and modeling of weighted networks. Physica A, 346(1-2):34–43.
- [Bäuml et al.2013] Bäuml, M., Tapaswi, M., and Stiefelhagen, R. (2013). Semi-supervised learning with constraints for person identification in multimedia data. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3602–3609.
- [Bäuml et al.2014] Bäuml, M., Tapaswi, M., and Stiefelhagen, R. (2014). A time pooled track kernel for person identification. In 11th IEEE International Conference on Advanced Video and Signal Based Surveillance, pages 7–12.
- [Bost and Linares2014] Bost, X. and Linares, G. (2014). Constrained speaker diarization of tv series based on visual patterns. In IEEE Spoken Language Technology Workshop, pages 390–395.
- [Bost et al.2015] Bost, X., Linarès, G., and Gueye, S. (2015). Audiovisual speaker diarization of tv series. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4799–4803. IEEE.
- [Bost et al.2019] Bost, X., Gueye, S., Labatut, V., Larson, M., Linarès, G., Malinas, D., and Roth, R. (2019). Remembering winter was coming. Multimedia Tools and Applications, 78(24):35373–35399, Dec.
- [Bost2016] Bost, X. (2016). A storytelling machine? Automatic video summarization: the case of TV series. Ph.D. thesis.
- [Bredin and Gelly2016] Bredin, H. and Gelly, G. (2016). Improving speaker diarization of tv series using talking-face detection and clustering. In 24th ACM International Conference on Multimedia, pages 157–161.
- [Bredin2012] Bredin, H. (2012). Segmentation of tv shows into scenes using speaker diarization and speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2377–2380.
- [Clauset et al.2009] Clauset, A., Shalizi, C. R., and Newman, M. E. J. (2009). Power-law distributions in empirical data. SIAM Review, 51(4):661–703.
- [Clément et al.2011] Clément, P., Bazillon, T., and Fredouille, C. (2011). Speaker diarization of heterogeneous web video files: A preliminary study. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4432–4435.
- [Ercolessi et al.2011] Ercolessi, P., Bredin, H., Sénac, C., and Joly, P. (2011). Segmenting tv series into scenes using speaker diarization. In Workshop on Image Analysis for Multimedia Interactive Services, pages 13–15.
- [Ercolessi et al.2012a] Ercolessi, P., Bredin, H., and Sénac, C. (2012a). StoViz: story visualization of tv series. In 20th ACM International Conference on Multimedia, pages 1329–1330.
- [Ercolessi et al.2012b] Ercolessi, P., Sénac, C., and Bredin, H. (2012b). Toward plot de-interlacing in tv series using scenes clustering. In 10th International Workshop on Content-Based Multimedia Indexing, pages 1–6.
- [Everingham et al.2006] Everingham, M., Sivic, J., and Zisserman, A. (2006). "Hello! My name is... Buffy" – automatic naming of characters in tv video. In BMVC, volume 2, page 6.
- [Friedland et al.2009] Friedland, G., Gottlieb, L., and Janin, A. (2009). Using artistic markers and speaker identification for narrative-theme navigation of Seinfeld episodes. In 11th IEEE International Symposium on Multimedia, pages 511–516.
- [Ghaleb et al.2015] Ghaleb, E., Tapaswi, M., Al-Halah, Z., Ekenel, H. K., and Stiefelhagen, R. (2015). Accio: A Data Set for Face Track Retrieval in Movies Across Age. In ACM International Conference on Multimedia Retrieval.
- [Hanjalic et al.1999] Hanjalic, A., Lagendijk, R. L., and Biemond, J. (1999). Automated high-level movie segmentation for advanced video-retrieval systems. IEEE Transactions on Circuits and Systems for Video Technology, 9(4):580–588.
- [Koprinska and Carrato2001] Koprinska, I. and Carrato, S. (2001). Temporal video segmentation: A survey. Signal Processing: Image Communication, 16(5):477–500.
- [Labatut and Bost2019] Labatut, V. and Bost, X. (2019). Extraction and analysis of fictional character networks: A survey. ACM Computing Surveys, 52(5):89.
- [Li and Chen2003] Li, C. and Chen, G. (2003). Network connection strengths: Another power-law? arXiv, cond-mat.dis-nn:0311333.
- [McAuliffe et al.2017] McAuliffe, M., Socolof, M., Mihuc, S., Wagner, M., and Sonderegger, M. (2017). Montreal Forced Aligner: Trainable text-speech alignment using Kaldi. In Interspeech, pages 498–502.
- [McCarthy and Jarvis2010] McCarthy, P. M. and Jarvis, S. (2010). MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behavior Research Methods, 42(2):381–392.
- [Ratcliff and Metzener1988] Ratcliff, J. W. and Metzener, D. E. (1988). Pattern matching: the Gestalt approach. Dr. Dobb's Journal, 13(7):46.
- [Roy et al.2014] Roy, A., Guinaudeau, C., Bredin, H., and Barras, C. (2014). Tvd: a reproducible and multiply aligned tv series dataset. In 9th International Conference on Language Resources and Evaluation, pages 418–425.
- [Tapaswi et al.2012] Tapaswi, M., Bäuml, M., and Stiefelhagen, R. (2012). "Knock! Knock! Who is it?" Probabilistic Person Identification in TV series. In IEEE Conference on Computer Vision and Pattern Recognition.
- [Tapaswi et al.2014a] Tapaswi, M., Bäuml, M., and Stiefelhagen, R. (2014a). Story-based Video Retrieval in TV series using Plot Synopses. In ACM International Conference on Multimedia Retrieval.
- [Tapaswi et al.2014b] Tapaswi, M., Bäuml, M., and Stiefelhagen, R. (2014b). StoryGraphs: Visualizing Character Interactions as a Timeline. In IEEE Conference on Computer Vision and Pattern Recognition.
- [Tapaswi et al.2015a] Tapaswi, M., Bäuml, M., and Stiefelhagen, R. (2015a). Book2Movie: Aligning Video scenes with Book chapters. In IEEE Conference on Computer Vision and Pattern Recognition.
- [Tapaswi et al.2015b] Tapaswi, M., Bäuml, M., and Stiefelhagen, R. (2015b). Improved Weak Labels using Contextual Cues for Person Identification in Videos. In IEEE International Conference on Automatic Face and Gesture Recognition.
- [Tran et al.2011] Tran, V.-A., Le, V., Barras, C., and Lamel, L. (2011). Comparing multi-stage approaches for cross-show speaker diarization.
- [Yeung et al.1998] Yeung, M., Yeo, B.-L., and Liu, B. (1998). Segmentation of video by clustering and graph analysis. Computer Vision and Image Understanding, 71(1):94–109.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00101.warc.gz
CC-MAIN-2021-39
39,790
142
https://community.qlik.com/t5/group/printpage/board-id/qlik-scalability/message-id/862/print-single-message/true/page/1
code
OK, so since not even that simple execution works, we can rule out any issue with the connect-to-app functionality. The connection gets refused. Do you have IIS running, or any other application that would use port 80? In that case, try changing it to a different port, e.g. 81, in both the QMC and the scalability tool. If there is no such program, the issue is either with the settings in the script or in the proxy. Does the server allow NTLM users? In that case, run the tool as a user with access to the app, change to a virtual proxy prefix which allows NTLM users (or remove it if it's the root), and change the connection type to NTLM. If the NTLM user can connect, the issue is either with the header name, the virtual proxy prefix name, or token allocation/access rules. If the NTLM user also fails, the problem is most likely with the server name or token allocation/access rules.
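If it helps, a quick generic way to check whether something is already listening on a given port is sketched below. This is a plain Python standard-library check, not part of the Qlik scalability tool, and the host/port values are just examples.

```python
# Generic check (unrelated to the Qlik tooling): is anything listening on a given port?
import socket

def port_in_use(host: str = "localhost", port: int = 80) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        return s.connect_ex((host, port)) == 0  # 0 means the connection succeeded

print(port_in_use(port=80))
```

If this returns True while the Qlik proxy service is stopped, some other service (e.g. IIS) is probably holding the port.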
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00204.warc.gz
CC-MAIN-2020-24
877
4
https://www.coria-cfd.fr/index.php/SiTCom-B
code
SiTCom-B (Simulation of Turbulent Combustion with Billions of points) is a finite volume code that solves the unsteady compressible reacting Navier-Stokes equations on cartesian meshes. It uses a structured formalism, which means that the data is organized in multidimensional arrays, according to the corresponding cell position in physical space. It is mainly designed to perform DNS and highly resolved LES on thousands of processors. It is a totally new version of the previous SiTCom code, written by P. Domingo, which has been the main tool used to conduct numerical combustion research at CORIA over the past ten years. The main interest of this version is that it uses the full power of the Fortran 90/95 language: pointers, dynamic allocation and, most importantly, object-oriented programming. The numerical code is managed by Pascale Domingo and Guillaume Ribert.
- Finite volume discretization of the Navier-Stokes equations
- 4th and 2nd order central difference schemes
- Runge-Kutta time discretization (3rd and 4th order)
- Full multispecies formulation
- Realistic thermodynamics (CHEMKIN)
- Realistic transport properties (Hirschfelder & Curtiss)
- Multicomponent diffusion
- Complex, tabulated and hybrid chemistry
- Perfect gas, Peng-Robinson or SRK equation of state
- NSCBC boundary treatment
- Immersed Boundary Method
- Lagrangian solver
The code is actually a library of modules, each implementing an object; the main programs are simply built from this module library. All input files make extensive use of keywords. This is possible thanks to a simple parser implemented in parser_m. Nearly all objects can be put in linked (chained) lists, which are very flexible and much easier to manipulate than arrays. Blocks are sets of cells assigned to a single processor. Each cell of a block can be described by a set of 3 indices (ix,iy,iz) that represent its position in the block. Another important concept is the block_bound structure: it is an object that contains two sets of three indices (one for the lower corner and one for the upper corner). It is used everywhere to perform loops on the blocks. It is possible to put numerical probes into the flow that will record the value of some variables at given positions for each time step. Both Reynolds and Favre averaging are implemented, for any variable. Second- and fourth-order centered interpolations are implemented in SiTCom-B. Moreover, convective fluxes can be computed in either convective, divergence or skew-symmetric form. Artificial viscosity (based on the Jameson formulation) is also available to stabilize the computation near steep gradients. The code has been designed to run on thousands of processors via the MPI protocol. Parallel communications and I/O have been optimized to achieve this goal. When running on thousands of processors, I/O may be a major bottleneck. I/O in SiTCom-B has been designed to overcome this limitation. Moreover, the HDF format is used to store meshes, solutions, ..., which makes them easy to share across platforms and between people. A few pictures of SiTCom-B computations are available in this gallery.
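As a rough illustration of the block_bound idea described above: the actual implementation is a Fortran 90/95 derived type, so the snippet below is only a language-neutral sketch written in Python, with invented names, showing how two corner index triples drive a loop over the cells of a block.

```python
# Rough illustration only (the real code is Fortran 90/95): a block_bound-style
# object holds the lower and upper (ix, iy, iz) corners of a cell range.
from dataclasses import dataclass

@dataclass
class BlockBound:
    lo: tuple  # (ix, iy, iz) of the lower corner
    hi: tuple  # (ix, iy, iz) of the upper corner

def iterate_cells(bound: BlockBound):
    """Yield every (ix, iy, iz) inside the bound, as a triple loop would in the solver."""
    for iz in range(bound.lo[2], bound.hi[2] + 1):
        for iy in range(bound.lo[1], bound.hi[1] + 1):
            for ix in range(bound.lo[0], bound.hi[0] + 1):
                yield ix, iy, iz
```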
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474641.34/warc/CC-MAIN-20240225171204-20240225201204-00307.warc.gz
CC-MAIN-2024-10
3,154
30
https://forums.macrumors.com/threads/mach-desktop-anyone-know-where-to-get-hd-quicktime-videos.1139655/
code
Hey all, I just got Mach Desktop from the App Store and it has a lot of potential. However, the videos supplied with the software aren't high enough resolution, so they look pixelated on my 27" at native resolution. They look great on my MBP, however. I'm currently running a snippet from TRON in 1080p as the background (the light bike scene), and it does look amazing. Does anyone else have Mach Desktop, and have you created any decent looping videos for the background? Or do you know where to get some? Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823516.50/warc/CC-MAIN-20181210233803-20181211015303-00185.warc.gz
CC-MAIN-2018-51
501
1
https://www.ibm.com/developerworks/community/forums/html/topic?id=9abb845f-556d-4dc7-84b2-f57942a3116f&ps=25
code
I am stuck with a supposedly easy problem: I need to recurse over nested blocks. Does somebody have an example of this? So far I haven't found any clue. I have this structure in Rhapsody: root->Package->Object->Object->Object->Object->... (quite easy). I want to iterate over each Object and get, for example, its name and description. I am attaching my current template as a screenshot. It is only getting the objects at the first level... Projects/Packages/Package/Objects/Object/NestedElements/ModelElement(Object) is not being recognized. What is wrong there? Thanks for any help in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948064.80/warc/CC-MAIN-20180426031603-20180426051603-00068.warc.gz
CC-MAIN-2018-17
582
9
https://reverseengineering.stackexchange.com/questions/25848/get-a-networkx-graph-from-the-the-function-call-graph-of-the-file-using-idapytho
code
Is there any easy way to get a function call graph of a binary program using IDAPython and then convert it to a networkx graph, other than going through every function and constructing the call graph ourselves? Basically, I want a call graph in which I can tell which nodes are library calls and which are local, and which does not include functions that are called by libraries (so I don't go deep into nested library functions calling each other). I tried gen_simple_call_chart(), but there are two big problems: there is no difference between library nodes and local nodes in the generated DOT file (no color or anything), and CHART_IGNORE_LIB_FROM doesn't work; I don't want to include nodes that are called by library calls :( For example, all the nodes are black, no matter whether they are library or local:
"205" [ label = "sub_40AF20", pencolor = black ];
"206" [ label = "ShellExecuteW", pencolor = black ];
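For what it's worth, the "manual" construction the question wants to avoid is fairly short. The sketch below is only an illustration, not a tested answer; it assumes the IDA 7.x IDAPython module names (idautils, idc, ida_funcs), tags every node with the FUNC_LIB flag so library and local functions can be told apart, and can skip edges originating in library functions.

```python
# Illustrative sketch (assumes the IDA 7.x IDAPython API): build a call graph as a
# networkx DiGraph, tagging library functions via the FUNC_LIB flag.
import idautils
import idc
import ida_funcs
import networkx as nx

def build_call_graph(skip_lib_callers=True):
    g = nx.DiGraph()
    # One node per function, flagged as library or local.
    for ea in idautils.Functions():
        flags = idc.get_func_attr(ea, idc.FUNCATTR_FLAGS)
        g.add_node(idc.get_func_name(ea), ea=ea, is_lib=bool(flags & ida_funcs.FUNC_LIB))
    # One edge per code reference landing on a function start.
    for ea in idautils.Functions():
        caller = idc.get_func_name(ea)
        if skip_lib_callers and g.nodes[caller]["is_lib"]:
            continue  # do not follow calls made by library functions
        for insn_ea in idautils.FuncItems(ea):
            for ref in idautils.CodeRefsFrom(insn_ea, False):
                callee = ida_funcs.get_func(ref)
                if callee is not None and callee.start_ea == ref:
                    g.add_edge(caller, idc.get_func_name(ref))
    return g
```

Coloring or filtering can then be done on the networkx side, e.g. by dropping nodes whose is_lib attribute is True, or by exporting the graph to DOT with networkx's pydot helpers.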
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.54/warc/CC-MAIN-20240418124808-20240418154808-00691.warc.gz
CC-MAIN-2024-18
881
8
https://forums.naturalcapitalproject.org/index.php?p=/discussion/1531/invest-habitat-risk-assessment-model
code
This forum is shutting down! Please post new discussions at community.naturalcapitalproject.org
InVEST Habitat Risk Assessment Model
Hi, I was working with the sample data for the InVEST Habitat Risk Assessment model. However, I am facing an error in the final output, even though I have input the correct data for data quality as well as the weight, as per the sample data provided at http://data.naturalcapitalproject.org/invest-data/. I am attaching a JPEG of the error as well as the log file. It would be great if someone could help me out. Cheers :-)
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256163.40/warc/CC-MAIN-20190520222102-20190521004102-00008.warc.gz
CC-MAIN-2019-22
561
5
https://community.spiceworks.com/topic/898224-distrobution-lists-exchange-2010
code
I have a distribution list, "Board of Directors", and I would like this DL to be visible only to a group of five users. As it stands right now, "Authenticated Users" has READ access in the security list, so everyone can see the DL. When I remove "Authenticated Users" from the access list, my group of 5 users is not able to see it either. My issue is that the email addresses within this DL are the private email addresses of the board members, and I want to ensure all users cannot see them. Any help will be much appreciated.
You'll have to create a new Address Book Policy and assign the users to that book. If you are running SP2 on Exchange, you can use this site to help.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689192.26/warc/CC-MAIN-20170922202048-20170922222048-00425.warc.gz
CC-MAIN-2017-39
665
4
https://www.yangondirectory.com/job/education,food-restaurant,food-restaurant,advertising,admin-human-resources,education/pabedan
code
on 22 July, 2020
- Modernize and improve our HRIS product.
- Interact with all layers of a complete software system (SQL, APIs, UI) as a full-stack developer.
- Firm grasp of SQL, as much of the business logic for the company's application is implemented in stored procedures.
- Work with like-minded teammates who are passionate about the roles they play in building a strong product with users.
- 4+ years of .NET (VB/C#) work experience
- 4+ years of SQL work experience
- Consider yourself an expert in WinForms
- Comfort with git repos, branches, and pull requests
- Comfortable being an independent contributor
- Broad experience with .NET Framework 4.0 or above, including WCF and WF
- Design and develop front-end Microsoft ASP.NET web logic
- Experience in developing HRIS/Payroll, mobile app development (iOS or Android) or related projects is preferable.
... Read full article
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00285.warc.gz
CC-MAIN-2021-21
884
14
https://simple-b24.ru/guide.php?lang=en
code
Sorry, but the documentation is currently available in Russian only. However, you can check the auto-translated version. The bot is being developed by an employee of the <grafista> web studio. The bot has its own group, @SimpleB24, where you can contact us, ask questions, and make suggestions. There is also a channel, @SimpleB24_channel, with news related to the bot (release of new versions, planned maintenance, etc.).
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00464.warc.gz
CC-MAIN-2023-50
394
5