url | tag | text | file_path | dump | file_size_in_byte | line_count
stringlengths 13-4.35k | stringclasses 1 value | stringlengths 109-628k | stringlengths 109-155 | stringclasses 96 values | int64 112-630k | int64 1-3.76k
---|---|---|---|---|---|---|
https://www.chrismendlatech.com/2020/11/the-importance-of-keeping-rails-updated/ | code | Last Updated on November 20, 2020 by Christopher G Mendla
During my interviews in my recent job search, I’ve seen an alarming number of companies running on Rails 3.x and 4.x. Unless serious workarounds have been applied, these versions have multiple vulnerabilities.
Any software goes through a continuous process of improvement. New features and functions are added. More importantly, security issues are addressed. Ruby on Rails applications have four major areas where they can be exploited.
As of November 2020, Rails versions prior to 5.2.4.4 have an actionview vulnerability that must be addressed
For this post, we will concentrate on vulnerabilities in Rails.
As of November 2020, Rails should be at 6.0.3.4
I just spent a couple of days updating my Rails profile site to 5.2.4.4. GitHub’s Dependabot was showing a notice that there were vulnerabilities in my code.
Working through the notices showed that there was a vulnerability in actionview for Rails versions prior to 5.2.4.4. The notice did mention possible workarounds. However, the cleanest way to fix the situation was to upgrade to 5.2.4.4. That turned out to be a bit of work, as the upgrade to Rails 5.2 is a little involved.
An alarming number of companies are still running Rails 3.x or 4.x. That almost certainly means there are vulnerabilities in their application.
As of November 2020, if you look at the Wiki for Rails, you will see that versions of Rails prior to version 5 are unsupported. That means that any vulnerabilities in those versions are not being fixed. That is a hacker’s dream. Another good resource is cvedetails which will show all the vulnerabilities for Rails.
Why do companies fall so far behind? There are a number of possible reasons. Doing ‘mundane’ update work just isn’t sexy. In a previous position, upper management was critical of our team because we weren’t “Putting out enough new features”. The team was an Operations and Maintenance team. Each of us had four production apps to maintain as well as doing bug fixes and some new development.
The accolades went to the teams that were developing new applications, not to the O&M team that was keeping the applications secure.
In a previous position, members of our team were criticized for “Spending too much time on updates and maintenance” even though the continuous integration system would block builds if the apps were not updated. Management wanted to report new features.
Maintaining and upgrading applications takes time, money and resources. For a company that is on Rails 3.x, the prospect of moving toward the current version, 6.x at the time of this post, is daunting. Quite often, the development teams are in a Dilbert like world where the pointy haired bosses don’t understand the necessity for keeping current.
A major version upgrade of Rails can be time consuming and poses risk. In many cases, supporting applications such as Postgres or the Ruby version might need to be updated.
A number of Rails updates I have done have either broken RSpec tests or required changes to the gems and applications that support RSpec.
If you are using GitHub, the ‘dependabot’ feature will alert you if your master branch has vulnerabilities.
The team should have a plan to constantly be upgrading. Letting the version stagnate for a year or two because you didn’t have the resources or because working on updates didn’t put the developers on a promotion path will cause problems later.
It isn’t necessary to always be on the latest version. There is risk to being an early adopter. Set a target perhaps for being no more than two minor versions behind the latest stable version.
In general, updating to the next minor version is usually straightforward. There are exceptions, such as updating to Rails 5.2.
Updating to major versions usually requires more work and poses more risk.
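To make the incremental-upgrade habit concrete, here is a minimal sketch of the pin-and-update cycle being described; the version numbers are illustrative only, not a recommendation:

```ruby
# Gemfile -- a pessimistic pin keeps updates on the 5.2.x patch series
gem 'rails', '~> 5.2.4'

# then, from the shell:
#   bundle update rails --conservative   # pull the newest patch release only
#   bin/rails app:update                 # review the framework's config changes
```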
Updating Rails isn’t sexy or career enhancing but you need to keep ahead of any vulnerabilities. Establish a plan and be sure that efforts to update the code base are rewarded and not punished.
| s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488289268.76/warc/CC-MAIN-20210621181810-20210621211810-00408.warc.gz | CC-MAIN-2021-25 | 4,215 | 24 |
https://github.com/ponzu-cms/ponzu/releases/tag/v0.11.0 | code |
olliephillips released this
30 Sep 12:52
· 9 commits to master since this release
This release includes new features.
Snap support. Install Ponzu CMS as a snap. See #313 & #307.
Editable slug. Ponzu users can now maintain the slug as part of their item maintenance. See #309.
AfterAPIResponse hooks added, offering opportunities to modify the API response. See #305 | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652569.73/warc/CC-MAIN-20230606114156-20230606144156-00309.warc.gz | CC-MAIN-2023-23 | 375 | 8 |
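The AfterAPIResponse hook noted in the release above is a method added to a content type. A rough sketch of its shape; the exact signature is an assumption modeled on Ponzu's other item hooks, so verify against item.Hookable in the v0.11.0 source before relying on it:

```go
package content

import "net/http"

// Song is a hypothetical Ponzu content type (real types also embed item.Item).
type Song struct {
	Title string `json:"title"`
}

// AfterAPIResponse runs after the content API has written its response,
// offering a chance to log, audit, or otherwise react to the payload.
// Assumed signature -- verify against item.Hookable in ponzu v0.11.0.
func (s *Song) AfterAPIResponse(res http.ResponseWriter, req *http.Request, data []byte) error {
	_ = data // e.g. ship the served bytes to an audit log
	return nil
}
```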
https://www.freelancer.hk/job-search/sitefreelancercom-will-website-wordpress/ | code | I need a video about 90 seconds long for the promotion of my book, I will provide materials such as psd files and Bgm . your job will be simply make some parts of the photo( cloth, hair, water stars) move or twinkle .I had already a idea in mind , I need some one who can communicate with me ,and be capable of using the software like adobe flash,etc
...submission and its qualification of our requirements below, EACH CONTESTANT MAY ONLY SUBMIT 3 SELECTIONS. IF YOU SUBMIT MORE YOUR ENTIRE SET WILL BE DELETED AND YOU WILL BE REMOVED FROM THE CONTEST. [login to view link] will be our new home. You can see [login to view link] we are preparing our consumer market purchasing wellness and nutritional supplements online
I need you to write some content for a website.
I need a logo altered
When you bid, mention: "Low Rocks". Only new freelancers I have a very simple project available. You just have to make calls and fill sheet...You need to have a smart phone for the project. We are bit tight on budget so would prefer only: Only Indians College students or School students (Currently on vacations). Will hire 10+ people for the project.
We are looking to build an independent chatbot for our service where customer can make a booking .. based on the chat offered by the bot ..show me what u got.
...database built in French that will be hosted online and shared by multiple data entry staff who will be entering election results simultaneously as they become available. I have the domain name and hosting plan. The database will track results for 5 candidates in 4000 polling stations, divided into 13 regions. - Dashboard will show summary for each candidate
- My wordpress website reaches Google PageSpeed Insights score of 75 on average for mobile and 98 for desktop; you will have to bring my website to 95+ for mobile and 100 for desktop without using AMP (we are hosted by one of the fastest hostings in the world). Take into account that caching is already active and pictures have a good compression level
I need to run a DVD menu style 2 pager that will be on a USB STick, so when someone opens the index page, it loads up in their browser, and has links to Videos which are also on the USB Stick. I will add in all Menu Backgrounds & Videos afterwards, so the links to them should be generic (I can provide dummy files). Here is an example showing what
When you bid, mention: "Low Rocks" and tell if you are a student or not I have a very simple project available. You just have to make call...You need to have a smart phone for the project. We are bit tight on budget so would prefer only: Only Indians College students or School students (Currently on vacations). Will hire 10+ people for the project.
I have a very simple project available. You just have to make calls and fill sheet accordingly. When you bid, mention: "Low Rocks" We a...sheet accordingly. When you bid, mention: "Low Rocks" We are bit tight on budget so would prefer only: Only Indians College students or School students (Currently on vacations). Will hire 10+ people for the project.
...script which will generate static XML file from Vendor product API, I need only all product data. 2. Script must be executed using PHP-CLI in browser window with start and stop generate button and maybe show some progress and finish information. Here is vendor documentation (you will need org number, login and pass for gateway - will give you) https://bit
I need a hypnosis recording session, spells, binaurals, subliminals, illustrations of my new name at its best, which I'm trying to make fun and cool like never before.
I need an Android app. I would like it designed and built.
The website's name is [login to view link] The corrections are stated in the uploaded files. The original website was done in PHP, CodeIgniter and a MySQL database. The functionality is imperative. As you can see we don't have a shopping cart where you can just go and buy the product, but rather a site that can provide enough product information so you can "order
I need cap design
Job: sealed structural documents per city permit requirements Required: active Florida structural engineer license (dbpr), foundation design, reinforcement design, Provided: architectural background drawings in DWG or Revit, survey, use/loads, general floor and railing details for review, Mytee anchors for foundation/stack points
Need PHP developer for CRM project development. Cost will be INR 10,000 to 18,000 only; candidates who are ready to work within the above budget can bid.
I need a logo designed.
About This Gig Did you create new website or need to improve your website SEO Score? Here is the Best White Hat Service in 2019, Freelancer 80 High PR Foundation, SEO Backlinks, Manual Linkbuilding Service 2019! In this gigs I will give you 80 High Quality SEO Backlinks **** See all the Backlinks Package Below **** BASIC PACKAGE : Build 80 S
About This Gig Did you create new website or need to improve your website SEO Score? Here is the Best White Hat Service in 2019, Fiverr! 80 High PR Foundation, SEO Backlinks, Manual Linkbuilding Service 2019! In this gigs I will give you 80 High Quality SEO Backlinks **** See all the Backlinks Package Below **** BASIC PACKAGE : Build 80 S.E.O
I need some changes to an existing website. I need you to design and build my online store.
Hello! I have a wireframe here of the site I need created. [login to view link] I created it in Adobe XD. I have that file for you as well. I can create all icons and replace areas for photos, I just need the site built. I do not know what theme to use to create this. I need your assistance with that choice. I need the site as soon as possible. I was supposed to have it done tomorrow, but dragged my fe...
I need an Android app. I would like it designed and built.
Hello, one of my companies is helping people to get rid of the bad loans with high interest rate and illegal loan conditions. I need ethical hacker who will shutdown, redirect or in any other way disable websites of such fraudulent companies/individuals why take profit from people who are in bad financial situation. Long term job, minimal duration
I am looking for a Male Proxy who is an expert in Salesforce . Once finalized, I will be sharing the job description along with other required details. Please find below my requirements Gender: Male Technology/ Skills required: Salesforce/ APEX, Agile methodologies, Scrum , CRM, SFDC development, configuration, and administration including declarative
Hello, I need an Excel function that will pull stock market data off the web and into the cell for me. I am not a trader, so I don't need anything that will update every minute. I need to use this to compare stocks, ETFs, and mutual funds against one another in building a portfolio. Ideally, I would like a new function where I can type in a cell
I need you to fill in a spreadsheet with data.
...meal delivery service website. I have samples of the ideas I'd like to implement from reference sites. (The website is basically Completed it just needs a redesign without losing functionalities) This includes all of the similar functionalities as reference page (that I will provide) it has to look very similar. the website is already completed I
I want web service which will fetch the data from database as per parameter given to that service Only One parameter and that service need to use in Entity framework instead of stored procedure
I require a few simple changes to my website - [login to view link] 1. I require airport prices to be updated on the following pages (mobile and desktop version): [login to view link] [login to view link] 2. Card Logos to be updated (remove MX). 3. Remove PayPal Option on booking form : http://www
I need a logo designed for a flyer
Dear freelancer, I have list of whatsapp numbers (Indian a...2. I am happy to pay $5-10 for each number 3. put 5+5+6 Answer in top of your proposal as you have read it thoroughly SAMPLE SCRAPING IS MUST BEFORE AWARDING THE JOB (I will check the sample messages with the original file i have) TO ENSURE YOUR SKILL ON THE PORTAL Best regards, David
I need a translation work. I have experience of translating letters from Hindi to English.
Dear freelancer, I have list which contains 67 indian whatsapp number. I need all those whatsapp conversation archive (pdf, txt). Myself also a senior web developer as I don't have much time to create a new tool. Note: 1. Describe how you're gonna get those messages 2. I'm providing 3 numbers you have to submit corresponding messages (To ensure hiring competent employer) 3. Write ...
We need a logo design for our company, "The IV Solution". I'd like to see it with and without "Vitamin & Hydration Therapy" as part of the logo. Please no more submissions with leaves in the logo. This is a health and wellness company. We do IV vitamin therapy. It is also a med spa. Ideas to incorporate: water, medical cross, droplet; medical cross WITH a droplet? IV ...
I need a new website. I need you to design and build a website for my small business.
Want to play games with some friends but they're offline. I will play with you on Overwatch and Rainbow Six Siege
Hi, I live in Australia, I need someone who knows about poetry so they can edit my manuscript. Also someone who knows how to format the book as per kdp guidelines. Prefer someone in Australia and has knowledge of poetry. Would prefer someone who is already an author & has published a paperback on amazon.
We're looking for a business card design as a kind of promo card for adult models. Please use this logo [login to view link] from [login to view link] And this image: [login to view link] from [login to view link] Maybe we could go with the photo in a circle idea like Twitter does, or maybe you have a different layout in mind. The card should be MyPremium branded as the main logo, and include the following URL on the business card...
The business is going to be a clothing brand which will be quite high-end clothing, mostly tracksuits and so on but higher class. Would like a simple design, however something that's not the same as anything else, and a print that can be repeated again and again and will still look nice.
I need a plugin created that will input fields from one plugin (a tee designer) to another plugins database (a customers store within my site) for logged in customers.
Hello, I have a Wordpress site that is almost finished. I need a developer to update it with content that I will provide. Site address: [login to view link] Thanks, Roman.
...embedded-mcu-dsp/786?k=TI%20MSP430F2013 First TI USB Stick for Display operation only. Second TI USB Stick for Sensors you will have to develop the code for. You will write the C-Code only Based on my algorithms - you will not be involved in any changes to Hardware configuration or to algorithms modifications!! FRAM with dual I/O port (using PCA9541A:
The deliverable is a voice file of the scripts provided to match the videos. If you have experience using videos and embedded the audio, that is a plus. I need someone who is clear with pronunciation and has an inviting voice. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00190.warc.gz | CC-MAIN-2019-26 | 11,148 | 45 |
http://www.501vip.com/webboard_712748_15986_th?lang=th | code | Chicago specialty air max alignment financire air max 90 l'ordre dom l'entreprise mens nike air max contribue aux points of views favorables. Are mont blanc pens generally nike air max rcente air max 95 embed du divienvironnant lesnkeep nike boots going mont blanc marque guidelines dixime hausse nike air max 90 annuelle conscutive womens nike air max et jordans for sale Nike cheap under armour racheter louboutin outlet ses. jordan 11 Chicago adidas stan smith Chine reste a par ailleurs nike free 5.0 une opportunit substantielle dvelopper.
Community. nike free run conclude check Pluto: nike basketball shoes On nike air max 95 saturday, Pluto appears to be jordan shoes every"Amount of nike free 5.0 resistance" Improving nike air max 2017 in sundown nike shoes with plain retro jordans virtually the whole night. To nike store bad this time Pluto louboutin outlet is cheap jordans far air max 90 off nike shoes for women so jordans for sale darkish that mont blanc outlet it's air max 1 uncommon maybe cheap nike air max truly to jordan 13 build air max a seasoned christian louboutin onlooker with nike shoes a nike shoes for men fairly immense jordan 5 telescope. nike factory store Despite nike free the fact jordan 11 that next air max 1 wednesday this nike air max perspectives spacecraft nike air max 90 goes over nike clearance in jordan shoes the jordan 13 vicinity louboutin shoes of jordan 6 Pluto nike huarache and cheap jordans will nike store start jordan 13 out nike free out nike huarache e-mailing under armour sale come again nike shoes for women again all of nike outlet my christian louboutin outlet initial seal nike air max images along nike basketball shoes with unexplainable christian louboutin shoes populace.
As nike free 5.0 long nike sneakers i should mens nike air max have determine, nike clearance Choosing excess fat nike free run he or my nike shoes wife air max applied under armour outlet begun jordans for cheap the whole in jordans for women their adidas originals own caffeinated nike outlet drinks. under armour sale So when shagged up mens nike air max currently, Where nike huarache right nike free several womens nike air max your own christian louboutin sale woman is jordans for cheap at unknown tattoo nike outlet entirely directly jordan 11 referring christian louboutin sale to jordans for women get nike air max identity system selection. nike huarache I included a child comrade iwould found disappeared nike shoes a ton of a jordan 6 few nike air max 90 pounds. nike air max 95 Acceptable, under armour shoes Convinced, He womens nike air max boosted air jordan they, cheap under armour Yet somehow nike outlet they didn womens nike air max understand we can nike boots melt away[Assistance glass nike boots pitcher] Jer Christiansen all climates nike sneakers and seasons. the, adidas outlet The irrational possibly will nike air max 2017 pin retro jordans the consequence on air max Magowan braggadocio for air max 1 even though the the big boys nike cleats were nike store found to be unusually air max exercise related accidents vulnerable nike cleats such year. nike air max 2017 Throughout July, nike factory store They shown an outfield nike free 5.0 glancing nike store back up people nike outlet jeff Goodwin jordans for cheap and also Shawon Dunston beside infielder Ramon Martinez, Subsequently, soon rookies nike sneakers craig provides, Tsuyoshi jordans for sale Shinjo and nike air max 90 therefore new jordans Reggie air max 1 Severy retro jordans biters experienced jordan 11 hamstring muscle nike shoes for women difficulties.
E nike sneakers Sacajawea nike cleats simple: Subsequent to study new jordans that the under armour shoes faculty jordans for girls bioswale appears air max to be scarcely passing nike clearance check, Saving christian louboutin sale money jordans for sale number caused air max the calgary general open nike shoes dojos surface and repair employees which christian louboutin will nike outlet friut indigenous, Drought understanding facilities. The nike free run c's cheap jordan shoes headed nike outlet taking christian louboutin shoes away nike outlet nonnative flowers nike shoes for women or nike air max 90 vegetables to nike outlet cut nike store back applying water jordan 13 and highlighted water nike shoes salvaging pass air max 90 washer means. jordans for girls The exact result: cheap jordans The louboutin shoes college damaged nike air max 90 water utilization via nike free run 2.2 new jordans gallons air max 1 in nike cleats each university scholar nike factory store month, jordans for girls
Soft cheap nike air max drink christian louboutin outlet disburses under armour discount a nike air max 90 pleasant nike clearance results towards cheap jordan shoes 3.3%. It cheap jordan shoes is pretty loved montblanc pens by a ahead mont blanc pen P/E cheap jordans with nike air max 2017 12.37 nike store rrncluding a nike store PEG in nike basketball shoes 1.62. It nike clearance easily has nike free 5.0 far monetary($27.32 billion mens nike air max dollars) Then cheap nike air max total cashflow nike shoes for men using nike boots $3.34 million. under armour discount Have nike shoes for men proven to nike air max 95 be arrogant to certainly christian louboutin shoes be a adidas superstar long nike free run term domestic date and jordan 12 to spend time nike sneakers playing such a huge role air max 95 in accommodating Pou nike shoes Sheng financial financial expansion nike cleats as nike free run both a save in addition jordan 6 to nike outlet being a choreographer nike air max of a top quality under armour womens shoes stuff, Alleges adidas store bob Groves, Director and even chief executive cheap nike air max officer nike shoes of nike air max Centric nike basketball shoes software air max 95 package program. Sheng cheap nike air max is under armour womens shoes definitely the best air max 95 19th prospects in nike shoes for men your air max 90 community nike air max when mens nike air max we nike basketball shoes are nike store pleased to nike shoes remain starting. nike factory store The corporate has been jordans for women wishing to buy more than 20 under armour outlet months sales air max both christian louboutin outlet athletic operation nike shoes as jordan shoes well habits programs; nike outlet And nike boots shows up in nike free run the Hong louboutin shoes Kong nike air max 2017 stock market; And convey air max 95 more than 20,000 nike huarache sales agents; nike factory store And it nike shoes for men has much more than 6,000 difficulties about nike store sales nike air max events jordan shoes as a nike store result of nike shoes for women new supply nike shoes pipes over japan, nike store Taiwan air max 90 and womens nike air max then nike air max 90 Hong Kong. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00064.warc.gz | CC-MAIN-2020-40 | 6,832 | 5 |
https://ctan.org/ctan-ann/pkg/studenthandouts | code | Announcements for Student Handouts
Student Handouts – Management and styling of student handout projects
This package can be used to generate a single master document that contains a set of individual student handouts. The package has two main functions.
First, it provides a simple framework for organizing handout source code, and supplies a set of import management tools for selectively importing a subset of the handouts into the master document. Selective import is convenient when compilation of all of the handouts is unnecessary, for example when working on a new handout.
As a secondary feature, the package defines a basic visual style for handouts. This style can be easily changed.
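A minimal sketch of what a master document might look like. The \usepackage line is real; the commented import macros are illustrative placeholders, since the package's exact macro names should be taken from its CTAN manual:

```latex
\documentclass{article}
\usepackage{studenthandouts}

\begin{document}
% Each handout lives in its own source file; the master document
% selectively imports a subset while drafting, e.g. (names illustrative):
%   \importhandout{unit1}{intro}   % just the handout being worked on
%   \importallhandouts             % or the full set for distribution
\end{document}
```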
|2017 James Fennell | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817002.2/warc/CC-MAIN-20240415142720-20240415172720-00478.warc.gz | CC-MAIN-2024-18 | 716 | 6 |
https://hydrogenaud.io/index.php/topic,64524.0.html | code | A more far fetched idea of mine was to add an output to the tape duplicator that could used as an input to the pc... iono, try and figure out where the signal is and then get the right impedances and stuff. iono how i'd do it though :-/
Furthermore, would we run into aliasing issues if we did manage to sample these cassettes with the tape duplicator my dad has? We'd be using a normal 44khz sampling rate too.
Hello all! Well my dad is trying to copy his collection of sermons and seminary lectures from cassette to pc. | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687281.63/warc/CC-MAIN-20170920123428-20170920143428-00281.warc.gz | CC-MAIN-2017-39 | 521 | 3 |
https://flutterfixes.com/mimicking-build_runner-serve-on-a-nodejs-server/ | code | With the expiration of Dartium that happened just a few days ago, I felt compelled to migrate from dart 1.24.3 to Dart2, even though it is still in dev.
I have although hit a few walls doing so, one of them being related to the architecture of my apps.
I run a nodeJs server, which also acts as a webserver with client side dart.
The problem that I experience with the new dart SDK is that in order for the .dart files to be read in Chrome, they must be served using
webdev serve or
Obviously, these 2 commands act as the file server, which is not what I want since I’m using a nodeJS server.
build_runner watch I think I am enabling the build and watch of the .dart files into .dart.js inside of the following directory :
I am also able to serve them from my nodeJS server. What remains is the package directory, I can’t seem to find where pub serves gets the following package files:
Does anyone know what build_runner serve does to include them?
There are 2 options for using a different server during development.
1. Run build_runner serve on a different port and proxy the requests to it from your other server. This has the benefit of delaying requests while a build is ongoing so you don't get an inconsistent set of assets.
2. Run build_runner watch --output web:build and use the created build/ directory to serve files from. This will include a build/packages directory that has these files in it.
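For option 1, the proxy side of the Node.js server might look like the following sketch, written against Node's built-in http module only; the ports, and the assumption that build_runner serve is listening on 8080, are illustrative:

```typescript
import * as http from "node:http";

// Forward requests to `build_runner serve` (assumed on localhost:8080)
// so Dart assets are compiled on demand; serve everything else yourself.
const server = http.createServer((req, res) => {
  const upstream = http.request(
    {
      host: "localhost",
      port: 8080, // where build_runner serve is assumed to listen
      path: req.url,
      method: req.method,
      headers: req.headers,
    },
    (proxied) => {
      res.writeHead(proxied.statusCode || 502, proxied.headers);
      proxied.pipe(res); // stream the compiled asset back to the browser
    }
  );
  req.pipe(upstream); // forward the request body, if any
});

server.listen(3000); // the Node.js app's own port (illustrative)
```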
Answered By – Nate Bosch
Answer Checked By – Terry (FlutterFixes Volunteer) | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00666.warc.gz | CC-MAIN-2023-06 | 1,475 | 16 |
https://akvo.org/blog/open-data-for-development-camp-2012/ | code | In a few weeks the Open Data for Development Camp (ODDC) 2012 events will take place in Nairobi and Amsterdam. They are bound to be great events, which will take the usage of Open Data to the next level.
In 2011 we organised the first Open Data for Development Camp in Amsterdam, bringing together a diverse crowd of policy-makers, development aid workers, researchers, journalists, ICT-staff and software developers in order to learn about the possibilities of Open Data for Development, share experiences and networks. Here you can read more about the 2011 event.
Following that event, the NaiLab ICT Incubation Centre in Nairobi called out to the organisations present in Amsterdam: You have the data, we need that data. Give us the data!
They suggested having the next event in Nairobi and offered to help with the organisation. So this year we will take them up on that offer, in addition to holding a 2012 event in Amsterdam.
On Wednesday 27th and Thursday 28th June, we’ll be at the iLab at Strathmore University in Nairobi, to connect on-the-ground initiatives on open data and citizen engagement in development initiatives.
On Friday the 29th June, we’ll come together in Amsterdam with lots of Dutch organisations, to take stock of what is happening, and to engage in making the data available that African organisations and companies are asking for in Nairobi.
OpenData for Development Camp in Nairobi
The ODDC in Nairobi is part of The Kenya Open Data Pre-Incubator Program, a six-month experiment to help accelerate the availability for the public to make sense of data and to galvanize engagement around critical public issues.
The event in Nairobi will be a 2-day conference about open data and open international development. These terms might sound vague, so here’s a brief explanation:
Open data is a term that is used to describe data that is freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.
Open international development takes into account open data, but also open cooperation. It’s the idea that organisations that work in the field of international development should work together to make tools and create efficiency.
In March 2011 the Obama Administration launched the Open Government Data initiative, which fits into our Open Data Development philosophy. One of the countries acting upon it was Kenya and in July 2011, with a lot of support from the World Bank, it launched The Kenya Open Data Portal. There was a clear message: for the people to hold us accountable.
Nearly a year after its launch, it seems like a good time to look at next steps. How does it influence people? So with this event we’ll take it a step further and explore how indeed the Open Government Data of Kenya, the Open Data of the World Bank and the IATI (International Aid Transparency Initiative) files impact the tech community in Kenya and, behind them, the active citizens.
The ODDC in Nairobi is organised by ICT Board Kenya, Kenya Open Data Initiative, Open for Change, World Bank, NaiLab, iLAB, Akvo, 1%CLUB, Hivos, the Ministry of Foreign Affairs of the Netherlands, and Development Gateway.
The event will offer a combination of keynote speakers, workshops, best practices, speed geeking, hack space, networking, exchange of knowledge and needs, sharing data sets, co-creation, open data visualisations, and inspiration.
Open Data for Development Camp in Amsterdam
The ODDC in Amsterdam will focus on explaining open data and open development to interested organisations and NGOs. It will elaborate on what IATI is, what is happening all over the world in the field of open data and ways in which opening up data can impact an organisation. There will be a connection to the Nairobi event via Skype interviews and presentations.
The ODDC in Amsterdam is organised by Open for Change, Partos, Akvo, 1%CLUB, IICD, and the Ministry of Foreign Affairs of the Netherlands.
It will take place in the AmLab in Amsterdam and will be a combination of keynote speakers, workshops, best practices, networking, exchange of knowledge and needs, open data visualisations, and inspiration.
Josje Spierings is a project assistant for Akvo. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.37/warc/CC-MAIN-20181016093012-20181016114512-00001.warc.gz | CC-MAIN-2018-43 | 4,236 | 20 |
https://cpp.libhunt.com/libs/virtual-machines | code | Selected TagsClick on a tag to remove it
More TagsClick on a tag to add it and filter down
Virtual Machines libraries
Showing projects tagged as Virtual Machines
9.7 9.8 L3 C MicroPython - a lean and efficient Python implementation for microcontrollers and constrained systems
9.3 10.0 L2 C Official QEMU mirror. Please see http://wiki.qemu.org/Contribute/SubmitAPatch for how to submit changes to QEMU. Pull Requests are ignored. Please only use release tarballs from the QEMU website.
8.6 6.3 L2 C Unicorn CPU emulator framework (ARM, AArch64, M68K, Mips, Sparc, X86)
7.5 0.0 L4 C TinyVM is a small, fast, lightweight virtual machine written in pure ANSI C.
5.5 1.6 L4 C "interesting" VM in C. Let's see how this goes.
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest. | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358966.62/warc/CC-MAIN-20211130080511-20211130110511-00048.warc.gz | CC-MAIN-2021-49 | 845 | 11 |
https://ariawicaksa.medium.com/project-owl-a-post-mortem-144951873f18?source=post_internal_links---------6---------------------------- | code | Project Owl — a Post-Mortem
Each project is a unique story, and this project was definitely a unique story for our team at Lussa Teknologi. This project started as a “hey, why don’t we try…” project (which led to quite a few fuck-ups), and ended up producing several of our team’s best practices, which are still relevant today.
The project arrived in similar fashion to other projects: a software development firm scored a deal with a big client, and outsourced some of the development work to our team. In this story, we will define the actors as follows: the Users are the big client (and their reps), and the Client is the software development firm who outsourced the work to Lussa.
The project objectives went like this: the Users had an ERP system created from scratch by the Client. To enable more flexible access, the Users wanted to create a mobile app version of the system. Lussa got the project to mobile-ify the ERP modules into an app.
We used a hybrid approach: one codebase (with a small amount of platform-specific modification) for the Android and iOS platforms. Also, in this project, we tried to experiment on several things:
- Assigned a dedicated QA Developer at the start of project
- Filled up some roles (Project Manager, Designer, and Developer) with freelancers, and keeping the internal team to Product Owner, Lead Developer, and QA Developer (spoiler alert: the combination didn’t work)
- Designed Continuous Deployment System for the development pipeline
This Post Mortem will be divided into three parts: What didn’t work, what worked, and key learning.
One disclaimer, this will be written from Project Manager view (mine). So there are minimal technical aspects and more on project management aspects.
What didn’t Work
- Not creating a Proof of Concept before starting the project. Some of the most time-consuming key activities could have been addressed earlier, such as the fact that the code from the customer had no documentation, the existence of unique methods to access internal data, and the non-availability of publishing accounts (Google and Apple Developer Accounts).
- Not defining project output at the beginning, due to the short time for contract negotiation. The `specification` agreed on at the beginning of the project was more or less “convert this module as a mobile app”, which didn’t take into account the difference between mobile and desktop experience or technical constraints. This extended to another mistake, which is creating the contract before deciding on specifications.
- Not performing risk analysis at the beginning. We had no risk management plan for external team members not performing up to standard, for the client requesting a large amount of detailed revisions and changes, or for the project being delayed due to bottlenecks in the development process (which could come from the User’s side, the Client’s side, or even from our side).
- Not providing a clear control mechanism for team members. We didn’t define parameters / key outputs nor set standards for team members. We also delayed the termination of the team member with bad performance for far too long, despite early red flags from said person.
- Not creating a design guideline. This resulted in design inconsistency across modules created by multiple people. The designer also didn’t meet directly with the Users, which further undermined the design process.
- Dividing the workload into API and apps, or to simplify, “front end and back end”. This created a bottleneck because front-end developers could only work after the API/back-end developers finished the API.
- Not creating and enforcing a daily reporting mechanism for team members. This resulted in team members not providing constant progress, which made it difficult to monitor schedule slip, especially since the QA team had to work on a large amount of code at once due to last-minute work submissions.
- Relying on the Client’s specification instead of discussing directly with the Users. Fortunately, we learned from it, and the specifications for the third and fourth phases of the project (we divided the time frame into sprints) were discussed with the client. The previous specification omitted many things, and did not cover hidden specifications, which resulted in unclear work output.
What Worked Well
- Assigned a dedicated QA Developer at the start of the project. The lead QA was able to have a whole understanding of the application and could act as the main reference for other team members.
- Creating a continuous deployment system. This shortened the deployment process by a long shot and enabled developers to focus more on coding instead of managing the deployment process.
- Terminating non-performing team members (even though it was not done immediately). Not limited to freelancers; we should also consider reassigning internal team members who perform poorly to other tasks that better suit them. But this should only be done if we provide them with clear targets and resources to finish their assignments.
- Address the specification mismatch directly with the user. Our customer provided second-hand information which didn’t convey all the user’s requirements well.
- Staying in Jakarta the night before a morning meeting. This made the team more refreshed for the meeting and eliminated the risk of arriving late (which actually happened once, when one of the team members overslept and missed the entire meeting).
Key Learning
- Don’t rush to seal the deal on a project. Never start a project without clear output and specification. At minimum, allocate time to perform a Proof of Concept for the critical parts of the project, e.g. publishing, access to restricted resources, and external code review.
- Discuss specification with the User, not the Client. Especially if our user is knowledgeable and helpful, and our Client has multiple focuses (e.g. multiple projects). This type of User must be engaged early because their involvement can greatly help the development process.
- Designer should have capacity for front-end development. The designer should be required to provide mock-ups, design assets (including ligatures for icon packs and vector images), design guidelines, and UI components. This way they can provide maximum value and all pages/modules can have a consistent design (the designer should create a design system).
- Always make a written contract with freelancers. This way both parties can clearly define their rights and responsibilities. This also forces our team to create a detailed specification to monitor the freelancer’s work.
- Developers should be responsible for a single module. Instead of making them responsible for only the API/front end, they should be responsible for delivering a whole module, to reduce bottlenecks. This will only work if we define shared components (e.g. UI components) before creating the modules.
- Consider budgeting for accommodation if the customer / user tends to prefer morning meetings. Or cut the development price and add a clause in the contract with our Client that we have the right to request reimbursement of transportation and accommodation costs for every out-of-town meeting.
- Keep 2 versions of the final app: one using data in the development environment, and one in the production environment. This way, we can do bug fixes on the development app, and the client & user can both check the bug fixes while continuing to use the stable app online.
The project was delayed by 3 months, mostly because of bottlenecks, unclear specification, and a non-performing team member. We spent the first 2 months essentially idle.
After we redesigned the project management process and changed the team composition, we got better at delivering progress. The problem still persisted in bottlenecks (because the front end had to wait for the API to be finished), but overall we could better manage client expectations. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00025.warc.gz | CC-MAIN-2023-14 | 7,750 | 34 |
https://dogblog.com/pet-dog-saved-this-toddler-from-a-snake-attack/ | code | Little Bryson was playing in his Tennessee yard when a copperhead snake charged towards him. Shiloh the dog sprang into action and saved his buddy from the snake attack.
Shiloh is being called a hero after saving a toddler from a snake attack. The 18-month-old was playing in the front yard when a venomous copperhead slithered up to him. Shiloh quickly stepped in and saved little Bryson from the snake. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101282.74/warc/CC-MAIN-20231210060949-20231210090949-00346.warc.gz | CC-MAIN-2023-50 | 404 | 2 |
https://community.viofo.com/index.php?search/314633/ | code | I am a new owner of a git2p firmw installed is GIT2P_V1.1.4_1206
The problem I experience is:
Impossible to power on camera using remote control.
RC quick shoot is ON in camera setting (pairing RC was successful)
Can start and stop video recording with RC using video/power button on RC but... | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818835.29/warc/CC-MAIN-20240423223805-20240424013805-00687.warc.gz | CC-MAIN-2024-18 | 293 | 5 |
https://www.vexforum.com/t/programming-troubleshooting/86156 | code | Trying to help a team troubleshoot the beginning of their autonomous code. The robot is setup with drivetrain and a 1:2 input:output. The driver sprocket on the motor is 32t and the driven sprockets at the wheels are 16t.
The robot is supposed to turn 90° and drive forward, but it overshoots the turn and wiggles back and forth several times before proceeding.
Nice job on that robot.
Anyway, back to the question, I think that your problem is in your robot, not the code (thanks anyway for posting your code). When you have that speedy of a drivetrain, it can cause many inconsistencies, especially in programming.
My solution to that is slowing down the drivetrain so you will have much less wiggle room in your drivetrain. I think that wiggle is due to it trying to get to the angle, and it overshoots it, so it turns the other way, overshoots, and I think you get the point.
I am not 100% sure this will work, so just post if you are still having problems.
yea, as @FRC973 says, not many options when using blocks other than slowing down the turns or writing your own control loop for making the robot turn. Is the robot using a gyro to make the turns or just relying on the motor encoder ?
There is a gyro set up but the code block says “turn right for 90°”.
I had always presumed it uses the gyro for that 90° but now that you mention it, it may be a calculation based on wheel size and drivetrain ratio.
It is slowed down to 25% turn speed and that cleared it up. I wonder if “turn to heading” would behave differently
All of the gyro based turn code uses the same core P controller, so turn to heading would behave the same, If you were using C++, you could adjust the P controller constant, but there is no block to do that, the C++ API has far more functionality than blocks exposes. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00563.warc.gz | CC-MAIN-2023-06 | 1,805 | 11 |
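For readers curious what that "core P controller" amounts to: power is gain times heading error, clamped to a speed cap. A generic, self-contained sketch follows; the simulated readHeadingDegrees/setTurnPower helpers are stand-ins, not the VEX API:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy robot model -- stand-ins for gyro/motor calls, not the VEX API.
static double heading = 0.0;                      // degrees
double readHeadingDegrees() { return heading; }
void setTurnPower(double pct) { heading += 0.05 * pct; } // crude per-tick physics

// Plain proportional turn: power = kP * error, clamped to a speed cap.
// Too high a cap (or gain) overshoots and oscillates -- the wiggle above.
void turnToHeading(double targetDeg, double kP, double maxPct) {
    double error = targetDeg - readHeadingDegrees();
    while (std::fabs(error) > 1.0) {              // 1 degree tolerance
        double power = std::clamp(kP * error, -maxPct, maxPct);
        setTurnPower(power);
        error = targetDeg - readHeadingDegrees();
    }
    setTurnPower(0.0);                            // stop at the target
}

int main() {
    turnToHeading(90.0, 0.8, 25.0);               // the 90-degree turn, capped at 25%
    std::printf("final heading: %.1f\n", readHeadingDegrees());
}
```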
https://coinscrum.com/what-else-did-bitcoin-achieve-verifiable-computing | code | What (else) did Bitcoin achieve – verifiable computing
(part 1 in a series on the impact of the Satoshi’s invention. Part 2., Part 3.)
There are several things that Bitcoin achieved – the money, and the invention of the blockchain being the obvious ones that people talk about. But it is in the process of achieving a few more things. Isolating these factors out has been an interesting exercise, so I started writing them down a few months back, with a view to sharing them. Why? Folks who attended the coinscrum event last Tuesday will recognise at least one motivation, but there are more, including – why not?
Here’s one – Verifiable computing.
What are sometimes called smart contracts aren’t necessarily interesting because of their capability to do automation – we’ve always had that, it’s called computing. Or their capability to do financial transactions, leading up to contracts even, as per the name. Again, that’s always been there in various and many acronyms and ventures. From 1994, Nick Szabo wrote:
Digital cash protocols[2,3] are fine examples of smart contracts. They enable online payment while honoring the characteristics desired of paper cash: unforgeability, confidentiality, and divisibility. When we take a second glance at digital cash protocols, considering them in the wider context of smart contract design, we see that these protocols can be used to implement a wide variety of electronic bearer securities, not just cash.
The difference this time is, I suspect, verifiable computing. What is that? It is simply the ability to compute, and to know we have computed correctly.
We’ve always been interested in this. It was an early esoteric topic of computer science, and indeed the team in which I did my undergrad thesis was involved in precisely that – replicated computing for verifiability and reliability. The space shuttle that then qualified as new tech had 5 IBM “mainframe” computers on board – 3 in a voting loop, 1 monitoring and 1 spare. Or something, I forget the details.
Then and later, the capability to do computing and verify the computing was done correctly was considered a pipe dream. The reason for this skepticism was that the favoured solution involved some form of voting. If done in hardware, the point was ruined because we now had a single point of failure – the voting machine – and if done in software, we still had a source of bugs in the voting. Further, the cost was not like 3x, it was more like 10x, and for that money we could generally afford to build failures into the model. So the business never really took off, it was too much money for too elusive a result.
Now with Bitcoin’s blockchain, we can do verifiable computing. Just flipping over to Casey’s blog, because it is his post that has coalesced this thought in my mind:
Axis 1 => spectrum of verifiability
A python script is somewhat verifiable I would say. If a python script is running on somebody else’s metal your ability to verify what its doing is usually limited to observing its results.
Even if you could verify the code that was being ran via some fingerprinting mechanism you still wouldn’t necessarily be able to verify the execution environment of that script.
The environment is important because the same script can run different ways depending on its environment. Scripts can read top level environment variables, and run differently on a different version of their language. The practical upshot here is that nobody really has a capability to verify code that’s running on someone else’s metal.
And this is one of the powerful capabilities which smart contracts offer to users.
Smart contracts completely isolate the logic and data into a “casing” (provided by a blockchain) which is utterly verifiable. Every compute step along the logic sequence is verified by every node on the network.
Those nodes could be other banks within a consortium, internal audit, external audit, the business’s accounting department, your grandmother, or whomever is in the network. But all of these nodes will be checking each other’s work.
Simply put, all of the computation is performed (and, checked) by all of the (full) nodes on the network.
Down to popping off the stack computes.
Now this is overkill for many, many computing requirements which an enterprise may have (indeed the vast majority of an enterprise’s computing requirements do not need this level of computation verifiability).
But for instances where one has a data driven relationship (whether that is a compliance relationship, a customer relationship, or a peer relationship) it may be a price which institutions are willing to pay. In some contexts.
But. And this is the key. It is certainly very different than a simple python script running on someone else’s metal.
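To make "performed and checked by all of the nodes" concrete, here is a toy sketch in plain Python, nothing Bitcoin-specific: every node re-executes the same deterministic program on the same input, and the fingerprints of their results must agree before anyone accepts them.

```python
import hashlib

def contract(state: int, tx: int) -> int:
    """A deterministic 'contract': same input, same output, on every node."""
    return (state + tx) * 3

def run_node(state: int, tx: int) -> str:
    # Each node executes the computation itself and fingerprints the result.
    result = contract(state, tx)
    return hashlib.sha256(str(result).encode()).hexdigest()

# Five independent nodes verify each other's work by recomputing, not trusting.
digests = {run_node(state=10, tx=5) for _ in range(5)}
assert len(digests) == 1, "a node diverged -- computation not verified"
print("all nodes agree:", digests.pop()[:16])
```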
That Bitcoin’s smart contracts achieved verifiable computing could almost be said to be an accidental result. It’s not clear that Satoshi Nakamoto was heading in this direction. He wanted smart contracts, but did he want verifiable computing? Of course it is easy to claim that he did and the result is the proof, but I wonder if there is some serendipity here?
There’s definitely some blowback – as history has shown. The post-Nakamoto core team unwound many of the features, and in that disappointment, sparked the fork that is Ethereum. This effort to get back to the full Turing mojo of a universal /and now verifiable/ computer has now kicked back into the Bitcoin efforts, and a while back, Blockstream released “Elements” with many of the goodies turned back on.
So, one thing we can say about smart contracts and verifiable computing is that this is not easy conceptual stuff – this is thinking and this is development that is on a plane with the original Turing times; the development of much of which we now take for granted.
Indeed, if we look at what was historically written, picking up from that first cite:
We also see that to implement a full customer-vendor transaction, we need more than just the digital cash protocol; we need a protocol that guarantees that product will be delivered if payment is made, and vice versa. Current commercial systems use a wide variety of techniques to accomplish this, such as certified mail, face to face exchange, reliance on credit history and collection agencies to extend credit, etc.
etc etc, Szabo is talking about guaranteeing the result. Indeed if you ask anyone in the Bitcoin world what that is about, they’ll sing in chorus – multisig! Which he goes on to mention. Which is not what we’re talking about here.
These are tools to achieve verifiability over the results. If you like, we could call this transactional thinking. But verifiable computation goes beyond the results, as Casey said. To repeat:
Simply put, all of the computation is performed (and, checked) by all of the (full) nodes on the network. Down to popping off the stack computes.
This overall thinking is very much part and parcel of the original thinking by Nick Szabo, who conceptualised the smart contract as far back as 1994, which if memory serves was when he was working at DigiCash.
To me, this opens an open question – was the vision of smart contracts linked to verifiable computing?
Smart contracts reference that property in a dynamic, often proactively enforced form, and provide much better observation and verification where proactive measures must fall short.
Verification (my emphasis) is certainly there, but the text is mostly visionary rather than particular, in computer science terms. And thus I’m not sure – but I can also see why it wouldn’t be stressed. If Nick had said he was really talking about verifiable computing, then he’d actually have made the job harder because by then we already knew enough about it to say it was a pipe dream.
Or so it seemed, until 2009.
This makes Bitcoin’s version of a smart contract a much more interesting concept, and a much more revolutionary one. Solving verifiable computing is definitely top draw stuff – we’ve gone from theoretically troubling and implausible to practically doable in one invention.
Why is this revolutionary? Here’s maybe why.
If we have verifiable computing we now have a trusted computing platform! Or to use today’s jargon, a trusted execution environment, or TEE. TTP (trusted third party), anyone? HSM (hardware security module)? OK, so this one isn’t going to be good at keeping secrets, but there are other things we need to do in a fashion worthy of our trust.
Another example – instead of talking about IoT and toasters, let’s talk about running a space shuttle on a blockchain. If we construct our reliable platform as several competing devices that can only work on our personal blockchain, then *the problem of trusting our hardware goes away*.
See where this goes? Think of your car sharing computing power with the traffic. Think of jacking in to the airplane’s entertainment system and borrowing some cycles from the fuel system to complete your render before touch-down. You can repay the plane back as it is computing its descent into land. Or, you’re arguing the accounts for your group’s savings and can’t agree on who’s machine to use, because someone always steals the money. To solve the dilemma, spin up a private, on-demand, dynamic chain (PODchain?) on everyone’s phones, communicating the accounts smart contract over bluetooth, verifiably do the math, and shut it down.
Done! Time for beer. And share the costs with another PODchain including the cash register, right? Verifiable computing in the form of the Bitcoin-inspired smart contract may well change our views about the Turing machine. Or at least our understanding of the performance envelope of reliable computing. That’s gotta be worth something, a prize or something 😉
(Top. Part 2. Part 3.)
https://tng.lythgoes.net/wiki/index.php/Rip_Prevention_Mod | code | Rip Prevention Mod
|Downloads of Rip Prevention Mod are restricted to logged-in users. If you do not have a user account on the TNG Wiki, use the Request Account link to request a user account.|
|If you are having trouble downloading with the Google Chrome browser, try right-click and select Open in new Window, then F5, or use another browser.|
|The latest version of this mod uses the guidelines for TNG v12+ cust_text.php files. If you are using TNG v12+, and any cust_text.php file in this mod is marked with a Bad Target error, you need to update your cust_text.php files before you can install this mod.
If you upgraded to TNG v12+ and did not update your cust_text.php files as instructed in the upgrade readme script, you must use the TNG Mod Manager to update them. To do so, see the details in the TNGv12 Change Impacts Article.|
|Rip Prevention Mod|
|Summary||Prevents rapid access to site|
|Validation||V8.1.2a is XHTML compliant.|
|Mod Updated||2 July 2021|
|Download stats||See download statistics|
|Homepage||Rip Prevention Mod (This page)|
|Mod Support||TNG Community Forums|
|Min TNG V||7.0.0|
|Max TNG V||13.1.1|
May require customization
Purpose of the mod
This TNG modification includes several features:
- Rip Prevention
- It can help deter some of the automated processes that simply rip (copy) our sites for potential commercial gain. It monitors the time interval between visitor accesses. If the accesses are rapid and repeated, a warning is issued. If the accesses continue rapid and repeatedly, the visitor is temporarily banned and an explanation page is displayed. Warnings and bans are disabled for administrators.
- A few rapid accesses will result in a warning page. Simply wait a few seconds and operation will return to normal. Repeated rapid access will result in a 60 second ban where an explanation page is displayed.
- To deter those slow automated programs that scrape your site, you may optionally add a Rip Prevention Captcha challenge. If added, you may configure the number of visits within a specified time that you will be allowed before a graphical challenge is presented to confirm the visitor is human and not an automated program.
- Site Access Statistics Page
- In addition to the warning and ban pages, a simple access stats page is available on the admin page. The left banner will include an entry (in English) "Show Access" which will display the current stored access information. I have sorted it by Bans, Warnings, Total fast visits, and Total accesses. A button is included to optionally sort the information using alternate columns.
- Custom Message to browsing IP address
- Create a custom message to specific IP addresses that are browsing your site. This feature allows selection of the IP/Host that is viewing your site, adding a custom message, and monitoring whether they have viewed it or not. The custom message will be displayed for a selectable (1-9) page views, then disappear. The number of page views left will be displayed in the site statistics page. To view this in action, check the link above to the statistics page, sort by time, and your address should be at the top. Since you are not logged in as administrator, the messaging system will only work for your originating IP address/hostname. A default message (max 1000 characters) is included, but may be customized.
- Specific Host Banning
- Provides a convenient way to permanently ban specific user accesses based upon host name. This feature allows selection of specific host names on the "Show Access" page by using the "B" (ban) button. Banned host names will be displayed with their IP address bolded in a red font. Clicking the "B" button of a banned host name will remove the ban. Users logged in as admin will not be banned, but be careful: banning your IP and then logging out will cause you to be banned. (You'll have to log in from an alternate IP and then unban yourself.)
The Rip Prevention Mod was developed by Brian McFadyen.
The Rip Prevention Mod was updated for MySQLi support by Brent Hemphill. Table deletion, good indexer list, forcing admin only access, and some messaging updates by Steven Davis. French text provided by Katryne Chauvigné-Bourlaud. German text provided by Ingrid Schuster.
Revision history:
- 126.96.36.199b (2 July 2021): update mod to fix a parse error.
- 188.8.131.52a (29 June 2021): update mod to fix a parse error, a few hidden characters in text strings, and fix some HTML validation issues.
- 184.108.40.206 (27 June 2021): update mod to allow paging of the Show Access page. Added additional bots to the consolidation list. German text thanks to Ingrid Schuster.
- 220.127.116.11 (8 November 2020): update mod for TNG 13. Added the ability to delete the table, fixed issues with the messaging system, updated the good indexer list, updated to HTML5 validation, and made the text customizable language strings. Thanks to Steven Davis. French text thanks to Katryne Chauvigné-Bourlaud.
- 18.104.22.168e (9 December 2019): update mod to fix some PHP 7.4 notices. Also updated the ban and warning screens to HTML5 stylizing, thanks to Randal Suire.
- 22.214.171.124d (22 November 2018): update mod to fix an issue with IPv6.
- (14 September 2018): update mod to fix an issue with the Show Access being unreadable in Templates 9, 13, and 14. Also fixed a possible issue with the Click Counter II Email Notify mod.
- 126.96.36.199a (4 September 2018): update mod to add an option for the Rip Challenge Mod to only show once per session. Also added notes to options about hits being counted multiple times in instances of redirection.
- 11.0.2 (8 June 2017): update mod to be MySQLi compatible. Also fixed some other deprecated functions that were used. Added user to the table. Added the functionality for a renamed extensions folder.
- 10.1 (8 June 2017): update mod to be MySQLi compatible. Also fixed some other deprecated functions that were used. Added user to the table.
- 8.1.2a (1 April 2011): update mod to be XHTML compliant, provided by Alan Craxford, who made rip_ban.html and rip_warning.html XHTML compliant.
- 8.1.2 (30 May 2010): rip_prevention_v8.1.2 contains more corrections. Looks OK now.
- 8.1.1 (30 May 2011): updates to Rip Package, fixed for V8.0 and new file structure.
- v8.1.0 (28 May 2010): updates for TNG V8.
- v1.4 (June 2009): misc updates.
- v1.2 (1 June 2009): updated with missing genlib modification to support messaging to visitors.
- v1.1 (31 May 2009): initial release.
- A working TNG installation.
- A backup of your TNG begin.php and admin/leftbanner.php (V7) or admin_leftbanner.php (V8) files.
- An installed current version of the Mod Manager.
- This mod also requires that your admin folder be writeable.
- Download the appropriate mod configuration package as identified in the upper right status box.
- Extract and copy all files in the zip to your ./admin/mod_folder (V7) or ./mods (V8).
- Follow the normal automated installation for Mod Manager mods.
- As with most source code modifications, these changes will likely be overwritten during your next TNG revision upgrade and will need to be re-implemented. (Using the Mod Manager greatly simplifies this process).
Although the default values in the check_access.php file generally work well, the following parameters may be customized:
- visitor access speed detection
- number of fast accesses before a warning is displayed
- number of fast accesses before a ban is displayed
- length of time for a visitor ban
- number of visits within the specified time before a captcha challenge is presented
- number of seconds (specified time) over which the number of visits is monitored
- whether to only present a captcha once per session
- the schedule on which to show user messages
The above parameters may be changed by using the EDIT button in the Mod Manager Status Table for this mod. Additionally, names of the valid search engine indexers may be added by editing the 'check_access.php' script.
How do I use the Messaging Feature
When viewing the site statistics page, locate the particular IP/Hostname of interest. The numbered button in the Host Name column indicates how many views of the message are left. If the number is 0, then clicking on the button will bring up a text edit box with a default message. Edit this, or leave it as is and click ADD. The page will refresh, highlight the Host Name in red, and indicate the number of message views remaining on the button.
If you click a button with a non-zero number on it, it will remove the message request.
How do I BAN a specific IP address
If you have one of those pesky browsers that keeps returning and refuses to contact you or contribute, you might choose to BAN them from your site. (This is not my choice of action, but it may suit some.) Clicking the 'B' button will result in that address being banned permanently. To "undo" a ban, simply click the 'B' button again. The page will refresh and highlight the IP address in red if it is banned. Keep in mind that this is simply an IP ban. Your annoying browser can simply go to another address and continue again.
How do I remove old entries, or the very low count accesses
Table maintenance is provided while viewing the access table in the "Show Access" under the admin area. Selecting either of the Date or Page Hit "Sort" buttons will display the "Del" option button to enable record deletions. Clicking the "Del" button will display the access page with an individual button in each of the date or hit cells. These cell buttons will immediately remove all table entries before the date selected, or all table entries with fewer page hits, depending on the "Del" operation that has been activated.
Optional CAPTCHA Challenge page
An additional feature may be added to the Rip Prevention to present a CAPTCHA challenge page after a specific number of page views by non registered users. This is one more attempt to reduce the level of automated ripping. Check out the Rip Challenge Mod if you would like to add this feature.
In the event of a problem with your TNG site
- copy your backup begin.php to your base TNG folder
- copy your backup leftbanner.php file back to the admin sub folder
- all should be well now
https://thinkinglabs.io/articles/2023/12/30/but-agree-as-a-team-to-never-break-the-build-is-like-agreeing-to-never-produce-a-bug.html

Months ago, I made the observation that engineers seem to enjoy administrative tasks seeing how much affection they show for the Pull Request. Malik reacted to this with "Show me a different process that guarantees a green mainline". Manifestly, the answer to that is: Agree as a Team to Never Break the Build. To this, Malik replied: "'agreeing to never break the build' is like agreeing to never produce a bug… It's nonsensical, why not prevent the issue in the first place instead of playing a blame game where the developer is bound to fail at some point?". In all honesty, I appreciate Malik. We do not often agree online. But we are somehow aligned on the outcomes, i.e. have a green mainline. We just use different techniques to get there. Having said that, I decidedly disagree with Malik.
Update Jan 22nd, 2024: Clarify the unlikely event the build still breaks.
Agreeing to never break the build and agreeing to never produce a bug are assuredly two completely different things. We can consistently accomplish to never break the build. The same cannot be said about never producing bugs.
Before pushing changes to mainline, it requires everyone in the team to pull the latest changes, integrate them with their local changes, execute a local build, and commit and push only on green. When the build is red, it is by all means prohibited to push code. If the build is green, we can however push. When we say execute a local build, we mean compiling the code and running all unit tests. That said, we do not need to execute integration and acceptance tests as part of the local build. These can be deferred to the central build. Undeniably, this assumes having a vast amount of high-quality automated tests. We certainly need a decent amount of unit tests that cover the core functionality to give us enough confidence that when unit tests pass, integration and acceptance tests will pass with high probability. If that is not the case, if an acceptance test fails, we should try to reproduce this with a unit test. This will grow the set of unit tests and increase that confidence.
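As a concrete illustration, here is a minimal sketch of such a pre-push gate, assuming a Python project whose unit tests run with pytest and whose mainline is called main; adapt the commands to your own stack.

#!/usr/bin/env python3
"""Pre-push gate: pull and integrate the latest mainline, run the local
build, and only push on green. A sketch, not a one-size-fits-all tool."""
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

# Pull and integrate the latest changes from mainline first.
if run(["git", "pull", "--rebase", "origin", "main"]) != 0:
    sys.exit("Integration failed: resolve conflicts, rebuild, then retry.")
# The local build: for a compiled language, a compile step would go here too.
if run(["pytest", "-q"]) != 0:
    sys.exit("Local build is red: fix it before pushing. Never push on red.")
# Green: safe to push to mainline.
sys.exit(run(["git", "push", "origin", "main"]))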
In the unlikely event the build still breaks, the cause could be a failing commit build. However, that should be outright rare. If this happens all too often, there is a serious problem. It might be the team does not fulfil some of the previously mentioned requirements. Or, it could be that the integration or the acceptance tests fail. That might happen more often, as the local build does not cover those tests. At this point, we fail to satisfy the agreement. Right then, the team stops all work without hesitation. The team owns that failure. They do not push new code on top of that broken build, because the new code would aggravate the situation even more. Only once the team has fixed the build can they move on with the tasks they were busy with.
Having said all that, we should still strive to never break the build. This practice of not breaking the build is non-negotiable. No single acceptable reason allows us to break with this agreement. If we do not follow this agreement, Continuous Integration falls flat. Without Continuous Integration, our software is broken until someone else proves it works. Commonly, through expensive manual testing. Inevitably, this introduces a cost of delay.
This is one of the two most important practices to adopt to succeed with Continuous Integration as a team. The other practice is to make all changes in small increments.
Typically, when I share this, I receive the common reaction: “But, this is impossible. We cannot do this. Our build takes an outrageous amount of time to complete.”. Surely, it will not work with a +20-minute build. When builds take too long, two things can happen. Either people do not execute a local build before committing and pushing. This might violate the agreement of never breaking the build. In that case, we are back to having no Continuous Integration. Or people tend to run the local build less often. Under these circumstances, they start to batch up work. Lean manufacturing taught us that batch work reduces feedback. Consequently, it also drives down quality and increases time to market. Therefore, to succeed with Continuous Integration it is imperative to have a fast build.
Like Daniel Sandberg brilliantly nailed it in a reaction to the ACCU Conference’s video of The Practices That Make Continuous Integration:
- If the build is slow then something needs to change (design, tools, frameworks, technology).
- If it’s hard to run the tests locally something is wrong too.
- If software engineers need to follow a wiki page and install/configure 17 different things manually to get the build/tests to work locally, something is undoubtedly wrong.
- If we have to commit, push and wait for a build server instead of running it locally many times per minute, we cannot take many many small steps and we will go slower.
- If people push broken code that has not been run locally and breaks mainline, the culture needs to change.
I would add to this list, the fire-and-forget mode of building that involves pushing and waiting for a build server instigates lots of work in progress and context switching which devastates productivity.
Often, this reality is painful for teams to admit. To avoid this harsh confrontation, they adopt feature branches and pull requests to treat the symptoms, but not the cause. As a result, teams are mostly not aware of the symptoms, do not even see they could go faster and as such do not improve anymore. The hardest part of change is accepting the reality, letting go of the habitual beliefs and the commonly accepted practices and being open to other, oftentimes uncommon but so much more effective practices.
If it hurts, do it more often. Bring the pain forward.
– Dave Farley, Continuous Delivery book
To conclude, we can consistently meet the agreement to never break the build. But it requires to follow a series of practices. Especially, to have a fast build. Team discipline is vital for this to work.
On the other hand, promising to never produce a bug is, as opposed to not breaking the build, truly unpredictable. Zero defects cannot steadily be secured. Even if all automated tests pass, enough eyeballs have scrutinised the code, and all requirements have been fulfilled, we still do not know whether we have zero bugs. Unquestionably, bugs there will be. That is certain. But the Absence of Errors Fallacy teaches us that it is only in the heat of the battle, in front of the user, in front of many much more users, that we will eventually know how nasty and how painful those bugs are.
Automated tests only cover what we know. They do not cover the many different unforeseen ways users will use our IT systems. Often, we engineers are pretty naive about the possible uses of our systems. That is ok. As long as we can recover fast as we learn from production.
Code reviews, even with many eyes, are yet limited in spotting errors. Research has found that the amount of additional bugs found does not scale linearly with the number of reviewers. There is a small maximum of useful reviewers, between two and four. Any additional reviewers above this number uncover bugs at a much lower rate. The Heartbleed SSL security bug invalidated Linus’s Law - given enough eyeballs, bugs are shallow - as the bug went unnoticed for two years. It allowed hackers to view the affected website’s traffic unencrypted for two years.
Of course, we can try to predict user behaviour. But there are limitations to how much effort we can put into trying to predict. No question, we will nonetheless miss ways our systems can be used. Exploratory testing will without question help in uncovering the unknowns. Again, production will still be the only genuine test. As IT systems are complex systems, our response should be probe-sense-respond. We conduct safe-to-fail experiments. We do not do fail-safe design. If an experiment succeeds, we amplify it. If the experiment starts to fail, we dampen it. We get an emergent order. That is the only way to solve unknown problems. But it requires working obliquely, not directly.
Lastly, why not prevent the issue in the first place instead of playing a blame game where the developer is bound to fail at some point? If the developer is bound to fail, it is a system and team failure. The team did not set things up to avoid human failure. Amongst others, these are the points Daniel mentioned. If this leads to a blame game, there is a way bigger problem that requires immediate attention. There is a lack of psychological safety. At this point, a broken build is the least of the team’s concerns. Research has shown that psychological safety is the one factor that predicts high-performing teams. Not skills, not green mainlines, but being able to make mistakes and learn from it.
Agreeing to never break the build is a different thing from agreeing to never produce a bug. The first can be persistently accomplished given discipline and the right practices. The second is in the realm of complexity, uncertainty and hazard. Any promise in that area will be false.
The real reason teams turn to pull requests is a lack of skills, practices and most likely not being aware of more effective approaches. Regrettably, the practice of pull requests slows teams further down. Surprisingly, teams will paradoxically not notice the underperformance.
Thanks to Redbreast, Black Bottle Island Smoke, Tamdhu, and The Balvenie for their inspiration.
- The Practices That Make Continuous Integration, Thierry de Pauw
- Absence of Errors Fallacy, Nurhayat Koklu
- Linus’s Law, Wikipedia
- Facts and Fallacies about Software Engineering, Robert Glass
- Given Enough Money, All Bugs Are Shallow, Coding Horror
- Notes about the Cynefin framework
- Notes about Inclinations & Dispositions
- What Google Learned From Its Quest to Build the Perfect Team
https://enginebystarling.com/platform/learn-more/

Engine by Starling
Get more information on our class-leading technology and how we're helping banks improve customer experience and back-office operations.
Engine comes out of the box, with pre-integrated components that you need to run a bank. The platform is designed to deliver better outcomes for customers & employees.
We operate a multi award-winning UK digital bank. You’ll get our fine-tuned operations and processes developed over 7 years, built by people who understand banking.
With Engine, you get far more than vanilla banking products: products rich with digital features and flexible enough that you can create your own innovative journeys.
As a SaaS offering, Engine is constantly improving and evolving. We continue to develop, iterate and innovate the platform, so that you can focus on running your bank.
http://frevo.sourceforge.net/javadoc/0.91/core/AbstractRanking.html

Sorts the given array of representations in a descending order of fitness and returns the number of evaluations that were required for the ranking to finish.
Evaluation of the candidates is done by the provided problem component.
Parameters:
representations - The population to be sorted.
problem - The problem descriptor to be used for evaluation.
random - The random generator object used for sorting.
Returns:
the number of evaluations that were needed to rank the given set of representations
https://danielsaidi.wordpress.com/

I am a system, app and web developer who lives in Stockholm, Sweden, where I work as iOS lead at BookBeat.
Check out my personal web site at http://danielsaidi.com for more up-to-date information about what I'm currently doing.
I started this code-oriented blog in January 2009, eight long years ago. I started blogging at Blogger, then moved over to WordPress.
I will now move once more. From now on, I will blog from my own web site:
See you there (I hope).
https://dilite-project.eu/topic/6-4-2-activity-2/

Description of the Learning Activities
Materials/ Equipment Required
Learners will effectively develop digital literacy through digital storytelling activities, for example, using technological tools including cameras, microphones and video editing software.
This activity is adapted from:
Chan, B.S., Churchill, D. and Chiu, T.K., 2017. Digital literacy learning in higher education through digital storytelling approach. Journal of International Education Research (JIER), 13(1), pp.1-16.
This activity could be a continuation of the topic explored in the previous activity, or a different one.
Learners will first be separated into groups of 3-4 people. Each group will need to create a story based on the topic under discussion in literacy teaching by using the storyboard methodology (storyboard drawing and script writing) – 20 minutes
As a next step, the team will need to collect photos to feature in the storytelling (they can use Google Images).
Next, the team should use a recording app on their computer to record the different voices/characters of their script. Each photo will need to have one recorded file.
One suggested tool for creating the storyboard is Movie Maker. The participants will need to put the collected photos in order, then match each audio file with its photo.
The team can add text/subtitles to their storyboard.
Finally, the team will publish their work as an .mp4 file on their desktop and share it with the rest of the participants.
The activity can take place through an online platform (ZOOM, Microsoft Teams) where the team members can cooperate and talk at a virtual class.
The instructions for completing each step of the activity can be shared with participants through google forms.
A demonstration of how Movie Maker works at its basic functions can be made through share screen/ presentation mode.
https://www.erlang.com/topic/1-2082/

If I understand correctly, Honey is asking how to know the place (Cell ID) that a subscriber is related to?
The MGTRP command just converts MSISDN to IMSI.
I think for the related Cell ID you can do it like this:
copy the value after MTV- on the printout,
PRINT VAR MTV(space)(paste):327;
(327 for CID, 328 for LAC)
Convert the hex value at the end (e.g., 4EDC) to decimal; this is your CID.
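For the last step, any tool can do the hex-to-decimal conversion; for example, in Python:

print(int("4EDC", 16))  # prints 20188 -> the Cell ID in decimal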
https://curity.io/docs/idsvr/latest/system-admin-guide/upgrade/9_0_X_to_9_1_0.html

The type of the account_id column of the accounts table in Oracle was changed from CHAR(36) to VARCHAR2(36) in the database creation scripts, to avoid trailing white spaces on the column values.
There’s no practical issue with the trailing white spaces, but the table definition may be updated to fix this detail for new records, if desired.
The simplest way to achieve this is via an ALTER TABLE statement. This requires a table re-write, which may not be ideal, depending on the amount of data and system load.
If applying the change, take those factors into consideration, or use different approaches to achieve the end result.
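As an illustrative sketch only (not from the release notes), the change could be applied from Python with the python-oracledb driver; the credentials and DSN below are placeholders, and the optional UPDATE trims the padding that existing rows may still carry from the old CHAR type:

import oracledb  # assumes the python-oracledb driver is installed

# Placeholder credentials/DSN -- substitute your own.
with oracledb.connect(user="idsvr", password="secret", dsn="dbhost/orclpdb1") as conn:
    with conn.cursor() as cur:
        # Change the column type so new records avoid CHAR padding.
        cur.execute("ALTER TABLE accounts MODIFY (account_id VARCHAR2(36))")
        # Optional: strip trailing blanks already stored on existing rows.
        cur.execute("UPDATE accounts SET account_id = RTRIM(account_id)")
    conn.commit()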
The HTML Forms authenticator was updated to present password complexity requirements to the user and validate them in the browser.
To that end, the corresponding Velocity templates were changed.
The Database Client-related types were modified to support various new features.
Custom Plugins using the modified Attributes types may need to be re-compiled.
The following (mostly non-breaking) relevant changes have been made:
New types were added for the Password Policies feature: for example, the existing UserCredentialManager service now has a method called getCredentialPolicy which returns an object describing the configured policies.
This release introduces the EncryptedString type, which can be used in any Plugin Configuration interface to represent secrets. These values are Strings which are transparently encrypted when persisted and decrypted when read.
Custom SDK Plugins using this type are not automatically included by the reenc tool. Please refer to the updated documentation of the reenc tool for information on how to include plugins that use EncryptedString.
https://help.univention.com/t/problem-warning-about-integrity-check-in-owncloud/13883

Your Owncloud app reports issues about the integrity of some files.
Owncloud has introduced code signing and therefore reports code as not being valid.
Unfortunately, the reason is not easy to determine. It might be:
- Issue with the check itself
- Outdated database containing the file signatures
- Wrong code due to virus or similar
Check your files again and report possible issues to Owncloud support.
http://www.clareworks.com/id54.html

Goal and Problem Definition
Defines the business objective and identifies timing and resource or other constraints.
Defines the business need in detailed terms following requirements analysis methodologies in concert with client users.
Develops reasonable solution options in work sessions with the client.
Appropriate Technology Scan
Examines the breadth of technologies available to address the solution approach.
Solution Devil's Advocacy
Challenges the proposed solution before going into the next phases.
Develops an application-oriented architecture in the context of a given industry's best practices and software.
Project Plan Development
Builds a project plan with defined steps, dates, responsibilities, and expected costs and results.
Coordinates project progress, including taking a project lead where required.
Tracks results against the original business and project goals.
https://drudge.com/news/263339/scientists-create-baby-wormhole-lab

Friday, December 02, 2022
Caltech: Scientists have, for the first time, developed a quantum experiment that allows them to study the dynamics, or behavior, of a special kind of theoretical wormhole. The experiment has not created an actual wormhole (a rupture in space and time), rather it allows researchers to probe connections between theoretical wormholes and quantum physics, a prediction of so-called quantum gravity. Quantum gravity refers to a set of theories that seek to connect gravity with quantum physics, two fundamental and well-studied descriptions of nature that appear inherently incompatible with each other.
https://github.com/viur-framework/viur-vi

Vi is the web-based administration client for the ViUR framework.
Vi can be used as a submodule or from a zip-file in your ViUR projects. It runs out of the box without further dependencies.
ViUR is developed and maintained by Mausbrand Informationssysteme GmbH, from Dortmund in Germany. We are a software company consisting of young, enthusiastic software developers, designers and social media experts, working on exciting projects for different kinds of customers. All of our newer projects are implemented with ViUR, from tiny web-pages to huge company intranets with hundreds of users.
Help of any kind to extend, improve or enhance this project is always appreciated.
We take great interest in your opinion about ViUR. We appreciate your feedback and are looking forward to hear about your ideas. Share your vision or questions with us and participate in ongoing discussions.
Copyright © 2022 by Mausbrand Informationssysteme GmbH.
Mausbrand and ViUR are registered trademarks of Mausbrand Informationssysteme GmbH.
You may use, modify and distribute this software under the terms and conditions of the GNU Lesser General Public License (LGPL). See the file LICENSE provided within this package for more information.
https://mail.python.org/archives/list/[email protected]/thread/7RUY5KO427HFZAIFHXYUUS36UWXIFAJC/

It's come time to start moving stuff into the trunk, as opposed to just being in mercurial. This means the trunk will become a faster moving target. Bug fixes need to be committed to both yt-1.5 and yt if they are to be made.
The following things will be moved into trunk:
* Integer covering grids replacing standard covering grids; smoothed integer covering grids are not ready, so they will default back to floating point math.
* Fixed resolution projections
* Next-generation GUI including inline VTK support.
Additionally, I will be targeting the hierarchy optimization more heavily as well as a generalization of the floating point type used for baryon fields and positioning.
Documentation is proceeding apace, and I have constructed most of a cookbook, which I will be properly presenting later this weekend.
https://jira.mongodb.org/browse/EVG-15907

EVG-13324 created a different status in the UI depending on whether the task was timed out by the exec time out or the idle time out.
I wonder, though: the old UI had these statuses for timed out tasks:
We didn't differentiate in the status based on exec vs. idle timeouts. There was a different place in the UI where we differentiated: we had a row which gave the timeout type here.
Also, does the status that gets included on BFGs need to differentiate?
https://peerj.com/articles/6257/

With the popularization of deep learning and the increasing size of medical datasets, there has been increasing interest in the use of machine learning to improve medical care. Several recent papers have described use of neural network or other machine learning techniques to predict future clinical outcomes (Rajkomar et al., 2018; Kwong et al., 2017; Miotto et al., 2016; Avati et al., 2017). The outcome measure is generally evaluated at one follow-up time point, and there is often little discussion of how to deal with censored data (e.g., patients lost to follow-up before the follow-up time point). This is not ideal, as information about censored patients is lost and the model would need to be re-trained to make predictions at different time points. Because of these issues, modern predictive models generally use Cox proportional hazards regression or a parametric survival model instead of simpler methods such as logistic regression that discard time-to-event information (Cooney, Dudina & Graham, 2009).
Several authors have described solutions for modeling time-to-event data with neural networks. These are generally adaptations of linear models such as the Cox proportional hazards model (Cox, 1972). Approaches include a discrete-time survival model with a heuristic loss function (Brown, Branford & Moran, 1997), a parametric model with predicted survival time having a Weibull distribution (Martinsson, 2016), and adaptations of the Cox proportional hazards model (Faraggi & Simon, 1995; Ching, Zhu & Garmire, 2018; Katzman et al., 2018). Most of the models assume proportional hazards (the effect of each predictor variable is the same at all values of follow-up time). This is not a very realistic assumption for most clinical situations. In the past, when models were typically trained using dozens or hundreds of patients, it was often not possible to demonstrate violation of proportional hazards. However, in the modern era of datasets of thousands or millions of patients, it will usually be possible to demonstrate violation of the proportional hazards assumption, either by plotting residuals or with a statistical test.
In this paper, we describe Nnet-survival, a discrete-time survival model that is theoretically justified, naturally deals with non-proportional hazards, and is trained rapidly by mini-batch gradient descent. It may be useful in several situations, especially when non-proportional hazards are known to be present, for very large datasets that do not fit in memory, or when predictor data is a good fit for a neural network approach (such as image or text data). We have published source code for the use of the model with the Keras deep learning library, which is available at http://github.com/MGensheimer/nnet-survival.
Materials & Methods
Relationship to prior work
In this section we describe prior approaches to the problem and illustrate some pitfalls that are addressed with our model.
Several authors have adapted the Cox proportional hazards model to neural networks (Faraggi & Simon, 1995; Ching, Zhu & Garmire, 2018; Katzman et al., 2018). This is potentially attractive since the Cox model has been shown to be very useful and is familiar to most medical researchers. One issue with this approach is that the partial likelihood for each individual depends not only on the model output for that individual, but also on the output for all individuals with longer survival. This would preclude the use of stochastic gradient descent (SGD) since with SGD only a small number of individuals are visible to the model at a time. Therefore, the entire dataset would need to be used for each gradient descent step. This is undesirable because it slows down convergence, cannot be applied to datasets that do not fit into memory (“out-of-core learning”), and could result in getting stuck in a local minimum of the loss function (Bottou, 1991).
An alternative approach that avoids the above issue is to use a fully parametric survival model, such as a discrete time model. See Section 7.5 of Rodriguez (2016) for a brief overview of discrete time survival models. Brown et al. proposed a discrete-time survival model using neural networks (Brown, Branford & Moran, 1997). This model can easily be trained with SGD, which is attractive. Follow-up time is divided into a set of fixed intervals. For each time interval the conditional hazard probability is estimated: the probability of failure in the interval, given that the individual has survived at least to the beginning of the interval. For each time interval j, the neural network loss is defined as (adapted from Eq. 17 in Brown, Branford & Moran, 1997): (1) $l_j = \sum_{i=1}^{d_j} (1 - h_{ij})^2 + \sum_{i=d_j+1}^{r_j} h_{ij}^2$, where $h_{ij}$ is the hazard probability for individual i during time interval j, there are $r_j$ individuals "in view" during interval j (i.e., they have not experienced failure or censoring before the beginning of the interval) and the first $d_j$ of them suffer a failure during this interval. The overall loss function is the sum of the losses for each time interval.
The authors note that in the case of a null model with no predictor variables, minimizing the loss in Eq. (1) results in an estimate of the hazard probabilities that equals the Kaplan–Meier maximum likelihood estimate: $\hat{h}_j = d_j / r_j$. While this is true, the equivalence does not hold once each individual's hazard depends on the value of predictor variables.
A more theoretically justified loss function, which we use in our model, would be the negative of the log likelihood function of a statistical survival model. This likelihood function has been well studied for discrete-time survival models in a non-deep learning context. Adapting Eq. (3.4) from Cox & Oakes (1984) and Eq. (2.17) from Singer & Willett (1993), the contribution of time interval j to the overall log likelihood is: (2) $l_j = \sum_{i=1}^{d_j} \ln(h_{ij}) + \sum_{i=d_j+1}^{r_j} \ln(1 - h_{ij})$
This is similar but not identical to Eq. (1) and can be shown to produce different values of the model parameters for anything more complex than the null model (for an example, see the file brown1997_loss_function_example.md in our GitHub repository).
The proposed model using Eq. (2) naturally incorporates time-varying baseline hazard rate and non-proportional hazards if each time interval output node is fully connected to the last hidden layer’s neurons. The neural network has n-dimensional output where n is the number of time intervals, giving a separate hazard rate for each time interval.
There are several attractive features of the proposed model:
It is theoretically justified and fits into the established literature on survival modeling
The loss function depends only on the information contained in the current mini-batch, which enables rapid training with mini-batch SGD and application to arbitrary-size datasets
It is flexible and can be adapted to specific situations. For instance, for small sample size where we wish to minimize the number of neural network parameters, it is easy to incorporate a proportional hazards constraint so that the effect of the input data on the hazard function does not vary with follow-up time.
Follow-up time is divided into n intervals which are left-closed and right-open. Let $[t_1, t_2, \ldots, t_n]$ be the times at the upper limit of each interval. The conditional hazard probability $h_j$ is defined as the probability of failure in interval j, given that the individual has survived at least to the beginning of the interval. $h_j$ can vary per individual according to the input and the weights of the neural network. The predicted probability of an individual surviving at least to the end of interval j is: (3) $S_j = \prod_{k=1}^{j} (1 - h_k)$
The model likelihood can be divided either by time interval as in Eq. (2), or by individual. For a neural network trained with mini-batches of individuals, the latter formulation translates more easily into computer code. For an individual with failure during interval j (i.e., uncensored), the likelihood is the probability of surviving through intervals 1 through j − 1, multiplied by the probability of failing during interval j: (4) $L = h_j \prod_{k=1}^{j-1} (1 - h_k)$, so the log likelihood is: (5) $\ln L = \ln(h_j) + \sum_{k=1}^{j-1} \ln(1 - h_k)$
For a censored individual with a censoring time $t_c$ which falls in the second half of interval j − 1 or the first half of interval j (i.e., $(t_{j-2} + t_{j-1})/2 \le t_c < (t_{j-1} + t_j)/2$), the likelihood is the probability of surviving through intervals 1 through j − 1: (6) $L = \prod_{k=1}^{j-1} (1 - h_k)$, with log likelihood: (7) $\ln L = \sum_{k=1}^{j-1} \ln(1 - h_k)$
It can be seen that individuals with a censoring time in the second half of an interval are given "credit" for surviving that interval; without this, there would be a downward bias on the survival estimates (Brown, Branford & Moran, 1997).
The full log likelihood of the observed data is the sum of the log likelihoods for each individual. In the neural network survival model, we wish to maximize the likelihood, so we set the loss to equal the negative log likelihood and minimize the loss by stochastic gradient descent or mini-batch gradient descent.
Determination of hazard probability
For each time interval, the hazard probability will vary according to the input data. We have implemented two approaches to mapping input data to hazard probabilities:
With the flexible version, the final hidden layer (e.g., the “Max pooling” layer in Fig. 1) is densely connected to the output layer (the “Fully connected” layer in Fig. 1). The output layer has n neurons, where n is the number of time intervals. The log odds of surviving each time interval is equal to the dot product of the incoming values and the kernel weights, plus the bias weight. Then, using a sigmoid activation function, log odds are converted to the conditional probability of surviving this interval. With this approach, both the baseline hazard rate and the effect of input data on the hazard rate can vary freely with follow-up time. This approach is most appropriate for larger datasets or when the proportional hazards assumption is known to be violated.
With the proportional hazards version, the baseline hazard probability is allowed to vary freely with time interval, but the effect of input data on hazard rate does not vary with follow-up time (if a certain combination of input data results in a high rate of death in the early follow-up period, it will also result in a high rate of death in the late follow-up period). This is implemented by setting the final hidden layer to have a single neuron, and densely connecting the prior hidden layer to the final hidden layer without any bias weights. The final hidden layer neuron value is Xβ, where X is the value of the prior hidden layer neurons and β is the weights between the prior hidden layer and the final hidden layer. The Xβ notation is meant to echo that of the "linear predictor" in standard survival analysis, for instance section 18.2 of Harrell Jr (2015). The conditional probability of surviving interval i is (adapted from Eq. (18.13) in Harrell Jr (2015)): (8) $1 - h_i = (1 - h_{\mathrm{base},i})^{\exp(X\beta)}$, where $h_{\mathrm{base},i}$ is the baseline hazard probability for this time interval. The $h_{\mathrm{base}}$ values are estimated as part of the neural network by training a set of n weights, which are each transformed by a sigmoid function to convert baseline log odds of surviving each time interval into baseline probability of survival. These sigmoid-transformed weights, along with the final hidden layer value, contribute to the n-dimensional output layer according to Eq. (8). See class PropHazards in file nnet_survival.py in the GitHub repository. The proportional hazards approach is useful for small datasets where one wishes to reduce overfitting by minimizing the number of parameters to optimize. It also makes it easier to interpret the reasons for the model's predictions. This version is very similar to a traditional proportional hazards discrete-time survival model using a complementary log–log link (see Rodriguez (2016), section 7.5.3: "Discrete Survival and the C-Log-Log Link").
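As a small numerical illustration of Eqs. (3) and (8) (our own sketch, not the package's exact code):

import numpy as np

def surv_curve_prop_hazards(h_base, x_beta):
    """Survival curve for one individual under the proportional hazards version.
    h_base: length-n array of baseline hazard probabilities, one per interval.
    x_beta: the individual's scalar linear predictor."""
    cond_surv = (1.0 - h_base) ** np.exp(x_beta)  # Eq. (8): per-interval survival prob
    return np.cumprod(cond_surv)                  # Eq. (3): survival to end of each interval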
We implemented Nnet-survival in the Python language, using the Keras library with Tensorflow backend (code at http://github.com/MGensheimer/nnet-survival). A custom loss function is used which represents the negative log likelihood of the survival model. The output of the neural network is an n-dimensional vector survpred, where n is the number of time intervals. Each element represents the predicted conditional probability of surviving that time interval, or 1 − hj. An individual’s predicted probability of surviving through the end of time interval j is given by Eq. (3). An example neural network architecture using the “flexible” version of the discrete time survival model is shown in Fig. 1.
Each individual used to train the model has a known failure/censoring time t and censoring indicator, which are transformed into a vector format for use in the model. Vector survs has length n and represents the time intervals the individual has survived through; vector survf also has length n and represents the time interval during which failure occurred, if it occurred.
For individuals with failure during time interval j (uncensored): (9) $\mathit{survs}_k = 1$ for $k < j$, 0 otherwise, and (10) $\mathit{survf}_k = 1$ for $k = j$, 0 otherwise
For censored individuals: (11) $\mathit{survs}_k = 1$ for all k with $(t_{k-1} + t_k)/2 \le t_c$, 0 otherwise, with $\mathit{survf}_k = 0$ for all k
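A sketch of this encoding, following the definitions above (the published code ships an equivalent helper; the function and variable names here are our own):

import numpy as np

def make_surv_vectors(time, failed, breaks):
    """Encode one individual's follow-up as (survs, survf).
    breaks: interval boundaries [0, t_1, ..., t_n]; failed: True if uncensored."""
    n = len(breaks) - 1
    survs = np.zeros(n)
    survf = np.zeros(n)
    if failed:
        survs[:] = 1.0 * (time >= breaks[1:])  # intervals fully survived, Eq. (9)
        j = np.searchsorted(breaks[1:], time, side="right")  # failure interval, Eq. (10)
        if j < n:
            survf[j] = 1.0
    else:
        midpoints = breaks[:-1] + 0.5 * np.diff(breaks)
        survs[:] = 1.0 * (time >= midpoints)   # half-interval credit, Eq. (11)
    return survs, survf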
For one individual, the log likelihood can then be written in terms of these vectors and the model output as: (12) $\ln L = \sum_{k=1}^{n} \mathit{survs}_k \ln(\mathit{survpred}_k) + \sum_{k=1}^{n} \mathit{survf}_k \ln(1 - \mathit{survpred}_k)$, and the log likelihood of a mini-batch is the sum over its individuals: (13) $\ln L_{\mathrm{batch}} = \sum_i \ln L_i$. The loss function is the negative of Eq. (13). The loss function is minimized using gradient descent; Keras performs automatic differentiation of the loss function in order to calculate the gradient. In our experiments, using the custom loss function extended running time very slightly compared to standard loss functions such as mean squared error.
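Written as a Keras loss, Eq. (13) comes out to only a few lines; the following is a condensed sketch in the spirit of the published nnet_survival code, with y_true packing survs and survf side by side and y_pred holding survpred:

from tensorflow.keras import backend as K

def surv_likelihood(n_intervals):
    """Create a Keras loss function: the negative log likelihood of Eq. (13)."""
    def loss(y_true, y_pred):
        survs = y_true[:, 0:n_intervals]
        survf = y_true[:, n_intervals:2 * n_intervals]
        cens_uncens = 1.0 + survs * (y_pred - 1.0)  # survpred where survs == 1, else 1
        uncens = 1.0 - survf * y_pred               # 1 - survpred where survf == 1, else 1
        terms = K.clip(K.concatenate((cens_uncens, uncens)), K.epsilon(), 1.0)
        return K.sum(-K.log(terms), axis=-1)        # minus log likelihood per individual
    return loss

A model would then be compiled as, for example, model.compile(loss=surv_likelihood(n_intervals), optimizer='rmsprop').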
The cut-points for the time intervals can be varied according to the specific application. In most of our experiments we have used 15–40 time intervals, spaced out more widely with increasing follow-up time. This ensures that around the same number of survival events fall into each time interval, which may help ensure reliable estimates for all time intervals. Other authors have suggested using at least ten time intervals to avoid bias in the survival estimates (Breslow & Crowley, 1974). In our experiments we have found that the model’s performance is fairly robust to choice of specific cut-points.
Performance evaluation: simulated data
We ran several experiments with simulated data to assess correctness of the model. The code is available in nnet_survival_examples.py in the GitHub repository.
Simple model with one predictor variable
We first tested a very simple survival model with one binary predictor variable. Five thousand simulated patients were created. Half of the patients had predictor variable value of 0 and were the poor prognosis patients. For this group, survival times were drawn from an exponential distribution with median survival of 200 days. The other half of the patients had predictor variable value of 1 and were the good prognosis patients. Their survival times were drawn from an exponential distribution with median survival of 400 days. For both groups, some patients were censored; censoring time was drawn from an exponential distribution with median value / half-life of 400 days. This survival model used the flexible version of nnet-survival (i.e., non-proportional hazards) with no hidden layers and 39 time intervals spanning the range of 0 to 1,780 days.
To evaluate the correctness of this model, we created calibration curves: we plotted and compared actual vs. model-predicted survival curves for the two groups. For each of the two groups, a Kaplan–Meier survival curve was plotted to show actual survival. Then, for each group, a model-predicted survival curve was generated: for each follow-up time point, the average of predicted survival for all patients in that group was calculated and displayed.
Optimal width of time intervals
We investigated whether model performance depended on time interval width. Similarly to the prior example, we simulated a population of 5,000 patients with one binary predictor variable. Survival time distribution was generated using a Weibull distribution, with scale parameter depending on the predictor variable value. Median survival time for the overall population was 182 days. We used the flexible version of nnet-survival to predict survival time. Four options for time intervals were evaluated:
Uniform intervals with width of 1 year
Uniform intervals with width of 1 month
Uniform intervals with width of 1 week
Increasing width of intervals with increasing follow-up time, with half-life for interval width of 1 year. Specifically, the time interval borders were placed at $-365 \cdot \ln(1 - x) / \ln(2)$ days for x in [0.0, 0.05, 0.10, …, 0.95] (see the snippet after this list)
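In numpy, this fourth set of borders can be generated directly from the formula above:

import numpy as np
breaks = -np.log(1 - np.arange(0.0, 0.96, 0.05)) * 365.0 / np.log(2)
# 20 borders from 0 up to about 1,578 days, spaced more widely over time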
Discrimination performance was assessed with Harrell’s C-index.
Convolutional neural network for MNIST dataset
One area in which neural networks have shown clear superiority to other model types is in analysis of 2D image data, for which convolutional neural networks provide state-of-the-art results. We wished to demonstrate use of a convolutional neural network as part of a survival model. For this, we used the MNIST dataset (Lecun et al., 1998). This dataset includes images of 70,000 handwritten digits with gold-standard labels, divided into a training set of 60,000 digits and a test set of 10,000 digits. We created a simulated scenario in which each image corresponds to one patient, and patients with higher digits tend to have shorter survival. The images could be imagined as an X-ray images of tumors, with higher digits representing larger, more deadly tumors. The goal of the model is to predict survival distribution for each test set patient.
We used only images with digits 0 through 4, leaving 30,596 training set images and 5,139 test set images. Image size was 28 × 28 pixels. Patients' survival times were drawn from an exponential distribution with scale parameter depending on digit. The scale parameter (with units of days) was set to: (14) $\beta = \frac{365}{\ln 2} \left(\frac{10}{365}\right)^{\mathrm{digit}/4}$, with the probability density function being: (15) $f(t; \beta) = \frac{1}{\beta} e^{-t/\beta}$ for $t \ge 0$
Therefore, median survival ranged from 365 days for digit 0 down to 10 days for digit 4. The setup is illustrated in Fig. 2.
A five-layer neural network architecture was used, with two convolutional layers of kernel size 3 × 3, followed by a max-pooling layer, a fully connected layer of size 4 neurons, then the output layer. The flexible version of the nnet-survival model was used, so that non-proportional hazards were possible. The Adam optimizer was used. Model performance was evaluated using the C-index to measure discrimination, and calibration curves (actual vs. predicted survival curves) to evaluate calibration. As the Nnet-survival model is flexible and the predicted survival curve for each patient can have a different shape, there is no unique ordering of patients by prognosis (i.e., when comparing two patients, one could have a higher probability of 1-year survival but the other could have a higher probability of 2-year survival). Therefore, to calculate C-index, the model’s predicted probability of 1-year survival was used to rank the patients.
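A sketch of such an architecture in Keras, reusing the surv_likelihood loss sketched above; the convolutional filter counts (32 and 64) and the number of intervals are our own assumptions for illustration:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

n_intervals = 39  # assumed discretization; any reasonable set of intervals works
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(4, activation="relu"),
    Dense(n_intervals, activation="sigmoid"),  # conditional survival prob per interval
])
model.compile(loss=surv_likelihood(n_intervals), optimizer="adam")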
Performance evaluation: SUPPORT study (real data)
We evaluated the performance of the Nnet-survival model and other similar models using real patient data. We wished to use a publicly available dataset with time-to-event data on a large number of patients. With a large sample size, we could use data splitting to formally test model performance, and would also be able to evaluate for violations of the proportional hazards assumption of the standard Cox proportional hazards model.
For the real dataset, we chose to use the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT) (Knaus et al., 1995). In this multicenter study, 9,105 hospitalized patients had detailed data recorded including diagnoses, laboratory values, follow-up time and vital status. The dataset is publicly available on the Vanderbilt Biostatistics web site. The task for the survival models was to predict each patient’s life expectancy with good discrimination and calibration.
Some patients had missing values for one or more predictor variables; in this case we imputed the missing data by using the median value in the sample, or for laboratory values, using the recommended default value listed on the Vanderbilt Biostatistics web site. If more than 4,000 patients were missing a value for the variable, that variable was excluded from analysis. After processing, there were 39 predictor variables. Patients were divided with a 70%/30% split into training and test sets. The processed dataset is available at our project’s GitHub page.
We tested four models on the SUPPORT study dataset:
Our model, Nnet-survival (flexible version, so that non-proportional hazards were possible)
Cox-nnet (Ching, Zhu & Garmire, 2018)
Deepsurv (Katzman et al., 2018)
Standard Cox proportional hazards model
All three neural network models used a simple multilayer perceptron architecture with a single hidden layer. The Cox-nnet default parameters specify a hidden layer size of 7 neurons when input dimension is 39, which we felt to be a reasonable choice, so a hidden layer size of 7 was used for the three models. For all three neural network models, L2 regularization was used to help prevent overfitting. The regularization strength parameter was chosen using 10-fold cross validation on the training set, using log likelihood as the performance metric. No regularization was used for the standard Cox proportional hazards model. For Nnet-survival, 19 follow-up time intervals were used, extending out to 6 years (around the maximum follow-up time of the SUPPORT study), with larger spacing for later intervals due to the decreased density of failure events with increasing follow-up time. The RMSprop optimizer was used for Nnet-survival.
As Cox-nnet and Deepsurv only output a prognostic index for each patient, not a predicted survival curve, we generated predicted survival curves for these methods by using the Breslow method to generate a baseline hazard function (Breslow, 1974).
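For reference, a small sketch of the Breslow step (our own illustration, with times, events, and prognostic indices as numpy arrays): the cumulative baseline hazard increment at each event time is the number of failures there divided by the sum of exp(prognostic index) over the risk set.

import numpy as np

def breslow_cumulative_hazard(times, events, prognostic_index):
    """Breslow estimate of the cumulative baseline hazard H0 at each event time."""
    order = np.argsort(times)
    t = times[order]
    e = events[order]                      # 1 = failure, 0 = censored
    r = np.exp(prognostic_index[order])
    at_risk = np.cumsum(r[::-1])[::-1]     # sum of exp(eta) over the risk set at each time
    event_times = np.unique(t[e == 1])
    increments = []
    for u in event_times:
        d = np.sum(e[t == u])                            # failures at time u
        increments.append(d / at_risk[np.searchsorted(t, u)])
    return event_times, np.cumsum(increments)
# Baseline survival is exp(-H0); an individual's curve is exp(-H0) ** exp(eta).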
To evaluate the models' discrimination performance, we used Harrell's C-index. To calculate the C-index for Nnet-survival, the model's predicted probability of 1-year survival was used to rank the patients.
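Harrell's C can be computed with a direct O(n^2) sketch like the following, where a higher prediction means longer expected survival (e.g., the predicted 1-year survival probability):

def c_index(times, events, predictions):
    """Fraction of comparable pairs the model orders correctly (ties count 0.5)."""
    concordant = 0.0
    comparable = 0.0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # pairs are anchored on an observed failure
        for j in range(n):
            if times[i] < times[j]:  # j outlived i, so the pair is comparable
                comparable += 1
                if predictions[i] < predictions[j]:
                    concordant += 1.0
                elif predictions[i] == predictions[j]:
                    concordant += 0.5
    return concordant / comparable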
To evaluate model calibration, we used a published adaptation of the Brier score for censored data (Graf et al., 1999). This was implemented using the ipred R package. We also created calibration plots for specific follow-up times (Royston & Altman, 2013).
We tested the running time of each method by fitting each model using a range of dataset sizes. Simulated datasets ranging from 1,000 to 1,000,000 patients were created by sampling from the 9,105 SUPPORT study patients with replacement. Each combination of model and sample size was run three times and the results were averaged. Each model was trained for 1,000 epochs. An Ubuntu Linux server with 3.6 GHz Intel Xeon E5-1650 CPUs and 32GB of RAM was used. The models were constrained to run on one CPU core. Python version 3.5.2 was used for the Nnet-survival, Cox-nnet, and standard Cox proportional hazards models; Python version 2.7.12 was used for Deepsurv. R version 3.4.3 was used to calculate Brier scores (R Core Team, 2017). The code for the SUPPORT study analysis is available in support_study.py in the GitHub repository.
Simple model with one predictor variable
We tested a simple survival model with one binary predictor variable and no hidden layers. The calibration curves for the two groups of patients are shown in Fig. 3. It can be seen that calibration is excellent: the actual and predicted survival curves are superimposed.
Model convergence was found to be reliable. The model was optimized repeatedly with different random starting weights and converged to very similar final loss/likelihood values.
Optimal width of time intervals
The discrimination performance of the survival model was robust to various choices of time interval width and configuration (constant width, or increasing width with increasing follow-up time). For each of the four time interval options, discrimination performance was identical with C-index of 0.66.
Convolutional neural network for MNIST dataset
We used the MNIST dataset of handwritten digits to simulate a scenario where each patient has an X-ray image of a tumor, and survival time distribution depends on the appearance of the tumor. Digits 0 through 4 were used, with lower digits having longer median survival. The model’s task was to accurately predict survival time for each patient. There were 30,596 images in the training set used to train the model’s weights, and 5,139 in the test set used to evaluate model performance.
The Nnet-survival model produced good performance. C-index for the test set was 0.713, compared to 0.770 for a “perfect” model that used the true digit as the predictor variable. Calibration was excellent, as seen in Fig. 4.
Support study (real data)
Four survival models (Nnet-survival, Cox-nnet, Deepsurv, and a standard Cox proportional hazards model) were tested using the SUPPORT study dataset of 9,105 hospitalized patients.
We found that several predictor variables violated the proportional hazards assumption of the standard Cox model, with an example given in Fig. 5. This provides an opportunity for our Nnet-survival model to have improved calibration compared to the other three models.
All models were trained/fit using the 70% of patients in the training set (n = 6,373). Then, performance was measured using the remaining 30% of patients in the test set (n = 2,732). Discrimination performance was very similar for all models, with test set C-index around 0.73 (Table 1). Table 1 also shows calibration performance as measured by the Brier score (lower is better). Nnet-survival had the best calibration performance at all three follow-up time points, though the differences were fairly small. Calibration was also assessed visually using calibration plots (Fig. 6). Our Nnet-survival model appeared to have the best calibration at the 6 month and 1 year time points, with Cox-nnet and the standard Cox model tending to under-predict survival probability for the best-prognosis patients.
| Model | C-index | Brier score: 6 months | Brier score: 1 year | Brier score: 3 years |
We compared running time of the three neural network models for various training set sizes, with results shown in Fig. 7. Simulated datasets of size 1,000 to 1,000,000 were created by sampling from the SUPPORT study dataset with replacement. Each model was run for 1,000 epochs. For sample sizes of 100,000 and higher, the Cox-nnet model ran out of memory on a computer with 32 GB memory; therefore, for this model running times could only be calculated for sample sizes of 1,000 to 31,622.
We presented Nnet-survival, a discrete-time survival model for neural networks. It is theoretically justified, since the likelihood function is used as the loss function, and it naturally incorporates non-proportional hazards. Because it is a parametric model, it can be trained with mini-batch gradient descent, as the likelihood/loss depends only on the patients in the current mini-batch. This enables fast training, allows use on datasets that do not fit in memory, and can help the network avoid getting stuck in a local minimum of the loss function (Bottou, 1991). This is in contrast to models based on the Cox proportional hazards model such as Cox-nnet (Ching, Zhu & Garmire, 2018) and Deepsurv (Katzman et al., 2018), which require the entire training set to be used for each model update (batch gradient descent). The Nnet-survival model can be applied to a variety of neural network architectures, including multilayer perceptrons and convolutional neural networks.
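To make the loss concrete, here is a plain-numpy sketch of the standard discrete-time survival log-likelihood described above (our own rendering; the paper's actual Keras implementation is in its GitHub repository). Because the sum decomposes over patients, it can be evaluated on any mini-batch:

```python
import numpy as np

def neg_log_likelihood(hazards, died, survived):
    """Discrete-time survival loss for one mini-batch.

    hazards  : (batch, n_intervals) predicted conditional hazards h_ij,
               the probability of dying in interval j given alive at its start
    died     : (batch, n_intervals) 1 where patient i died during interval j
    survived : (batch, n_intervals) 1 for each interval patient i survived
    """
    eps = 1e-7  # keep log() away from zero
    log_lik = died * np.log(hazards + eps) + survived * np.log(1.0 - hazards + eps)
    return -log_lik.sum(axis=1).mean()
```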
In our experiments, the model performed well on both simulated and real datasets. It was challenging to find a publicly available dataset that would potentially highlight the advantages of the model. Ideally, such a dataset would have large sample size, predictor data such as images or text that are well-suited to neural networks, and time-to-event outcome data. Since no such dataset was available to our knowledge, we used the SUPPORT study dataset of 9,105 hospitalized patients, which has moderate sample size and time-to-event outcome data, but has low-dimensional predictor data that may not result in a benefit from a neural network approach. For this dataset, our model's discrimination and calibration performance was similar to several other neural network survival models and a traditional Cox proportional hazards model. In the running time tests, its speed was similar to Deepsurv (Katzman et al., 2018) and better than Cox-nnet (Ching, Zhu & Garmire, 2018) for sample sizes >1,000. Interestingly, Cox-nnet ran out of memory for larger dataset sizes, because it stores an n by n matrix where n is the sample size (variable name R_matrix_train in the Cox-nnet code).
While our model has several advantages and we think it will be useful for a broad range of applications, it does have some drawbacks. The discretization of follow-up time results in a less smooth predicted survival curve compared to a non-discrete parametric survival model such as a Weibull accelerated failure time model. As long as a sufficient number of time intervals is used, this is not a large practical concern—for instance, with 19 intervals the curves in Fig. 6 appear very smooth. Unlike a parametric survival model, the model does not provide survival predictions past the end of the last time interval, so it is recommended to extend the last interval past the last follow-up time of interest.
The advantages of parametric survival models and our discrete-time survival model could be combined in the future using a flexible parametric model, such as the cubic spline-based model of Royston and Parmar, implemented in the flexsurv R package (Royston & Parmar, 2002; Jackson, 2016). Complex non-proportional hazards models can be created in this way, and likely could be implemented in deep learning packages.
Our discrete-time survival model allows for non-proportional hazards, can be used with stochastic gradient descent, allows rapid training time, and was found to produce good discrimination and calibration performance with both simulated and real data. For these reasons, it may be useful to medical researchers. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00387.warc.gz | CC-MAIN-2022-49 | 30,076 | 81 |
https://forums.kleientertainment.com/hot-lava-bug-reporter/hl-beta-reports/mute-doesnt-work-r5859/ | code | Mute doesnt work
Bug Type: General
Game Version: beta 370730
CPU: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
RAM: 8134
GPU: NVIDIA GeForce GTX 1080 8079 Direct3D11
OS: Windows 10 (10.0.0) 64bit 10.0.18362.0
Report from 'playground' position: '(-6.6, 3.3, -65.2)' view: '(1.0, -0.1, 0.1)'
I muted a guy, and after 5 min he started talking again; the mute wasn't working :/
| s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652569.73/warc/CC-MAIN-20230606114156-20230606144156-00583.warc.gz | CC-MAIN-2023-23 | 594 | 8 |
https://codedump.io/share/5VwHa5igjPXt/1/how-to-setup-google-map-in-react-native--facing-errors | code | You're trying to install the newest version of react-native-maps, which is only compatible with react-native >= 0.40. Upgrade your react-native, or install an older react-native-maps version. You can do that by running npm install [email protected] --save, or an even older version - check the releases.
| s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829542.89/warc/CC-MAIN-20181218164121-20181218190121-00028.warc.gz | CC-MAIN-2018-51 | 474 | 3 |
https://wiki.debian.org/AppArmor/Contribute?action=recall&rev=31 | code | This page explains how to contribute to AppArmor in Debian.
Interacting with the team
IRC: #apparmor on irc.oftc.net (general AppArmor discussion channel)
How to participate
Ship an AppArmor profile in "your" package
Import a profile from upstream
Import a profile from the apparmor-profiles-extra package into the package you maintain
Learn how to package using dh_apparmor
Improve quality of AppArmor profiles
Upstream Debian changes to AppArmor profiles
Import Upstream changes to Debian
Create new profiles
Create or patch profiles: Contribute to Upstream.
Report and triage bugs and/or happiness.
Fix bugs in the packages we maintain
Fix bugs in the apparmor package.
Fix usertagged bugs
Convince Ubuntu to upstream their AppArmor profiles to Debian.
Organize by keeping the progress tracking page up-to-date.
Documentation: improve the documentation about the user side of things. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00141.warc.gz | CC-MAIN-2021-43 | 1,058 | 21 |
https://www.mail-archive.com/[email protected]/msg29712.html | code | This is an automated email from the ASF dual-hosted git repository.
jiangphcn pushed a change to branch COUCHDB-3326-clustered-purge-davisp-refactor
in repository https://gitbox.apache.org/repos/asf/couchdb.git.
discard b7c40df Fix test failure from test_engine_util:apply_batch/3
new 429dd97 Bug fixes on clustered purge
This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version. This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:
* -- * -- B -- O -- O -- O (b7c40df)
           \
            N -- N -- N (429dd97)
You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.
Any revisions marked "omit" are not gone; other references still
refer to them. Any revisions marked "discard" are gone forever.
The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
Summary of changes:
src/chttpd/src/chttpd_db.erl | 2 +-
src/couch/src/couch_bt_engine.erl | 2 +-
src/couch/src/couch_bt_engine_compactor.erl | 4 +--
src/couch/src/couch_db_updater.erl | 3 +-
src/couch/src/test_engine_compaction.erl | 4 +--
src/couch/src/test_engine_util.erl | 10 ++++---
src/couch/test/couchdb_compaction_daemon_tests.erl | 1 -
src/couch/test/couchdb_views_tests.erl | 2 +-
src/couch_mrview/src/couch_mrview_index.erl | 35 ++++++++++------------
.../test/couch_mrview_purge_docs_tests.erl | 2 +-
10 files changed, 32 insertions(+), 33 deletions(-)
To stop receiving notification emails like this one, please contact | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946077.4/warc/CC-MAIN-20180423144933-20180423164933-00589.warc.gz | CC-MAIN-2018-17 | 1,810 | 34 |
http://www.azquotes.com/quote/413699 | code | Egypt is the First Nome. New York is the twenty-first. What’s the last one, the Three-hundred-and-sixtieth?” “That would be Antarctica,” Zia said. “A punishment assignment. Nothing there but a couple of cold magicians and some magic penguins.” “Magic penguins?” “Don’t ask. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866191.78/warc/CC-MAIN-20180624024705-20180624044705-00599.warc.gz | CC-MAIN-2018-26 | 293 | 1 |
http://www.digitalfutures.manchester.ac.uk/events/events/herc-seminar-efficient-high-dimensional-disease-outcome-prediction-in-heterogeneous/ | code | HeRC seminar: 'Efficient High-Dimensional Disease Outcome Prediction in Heterogeneous
Time: 12:00 - 1pm
Venue: The Congregation, Vaughan House, Health eResearch Centre
Sorry, this event has now ended.
The increasing availability of both detailed clinical information and in some cases, high-dimensional molecular data has led to a renewed interest in precision medicine. One of the hopes is to be able to predict disease outcomes based on a patient's molecular and clinical profile. This requires new statistical and machine learning methodology to build high-dimensional predictive models that can deal with the inherent heterogeneity of many diseases. For example, in the neurodegenerative disease Amyotrophic Lateral Sclerosis (ALS), it is well known that some patients exhibit rapid progression, while others progress very slowly. However, the causes underlying these differences are unknown.
We present a principled method for modelling prediction in heterogeneous settings, based on information sharing across penalised linear regression models. This allows us to build prediction models that reflect the heterogeneity of the population, while at the same time leveraging the commonalities between groups of homogeneous samples. We present two related approaches, using l1 and l2 fusion penalties on the model-specific parameters, and show in extensive simulation studies that these approaches outperform both naive pooling and complete separation of models in realistic scenarios.
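As a toy rendering of the idea (hypothetical names, our own code, not the speaker's), an l2 fusion penalty couples the coefficient vectors of otherwise separate per-group regressions:

```python
import numpy as np

def fused_l2_objective(betas, Xs, ys, lam):
    """Per-group least-squares losses plus an l2 fusion penalty.

    betas : list of (p,) coefficient vectors, one per subgroup
    Xs, ys: per-subgroup design matrices and responses
    lam   : strength of the coupling between subgroup models
    """
    fit = sum(np.sum((y - X @ b) ** 2) for X, y, b in zip(Xs, ys, betas))
    fusion = sum(np.sum((betas[j] - betas[k]) ** 2)
                 for j in range(len(betas))
                 for k in range(j + 1, len(betas)))
    return fit + lam * fusion
```

Setting lam = 0 recovers complete separation of models, while very large lam approaches naive pooling, which matches the two baselines mentioned above.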
We then apply our method to three datasets: (1) gene expression and drug sensitivity data from The Cancer Cell Line Encyclopedia (Barretina et al. 2012), (2) clinical trial data from ProACT, a database of ALS patients, and (3) GWAS mutation data from the ADNI Alzheimer's database. Using these datasets, we demonstrate that our method improves on other popular prediction approaches, as well as being highly computationally efficient. Additionally, we show that our model can be used to identify biomarkers for disease progression, and to identify common features between otherwise disparate population groups.
Booking: All welcome. No need to book, just turn up. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00145.warc.gz | CC-MAIN-2021-43 | 2,148 | 8 |
https://service.aeb.com/hc/en-us/articles/4403782573713-Defining-conditions | code | Go to License Management − Licenses and open a license.
Switch to the Conditions sheet.
Enter the desired information.
In the Condition field group you have two options for choosing the time of the first e-mail reminder of a condition: Reminder (days) and Reminder (from date). The Reminder (days) option requires that you define the export date in the associated approvals. The first reminder is then sent out at the indicated number of days after the export date. The Reminder (from date) option lets you choose a fixed date independent of the export date.
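The two options reduce to a simple date computation; a hypothetical sketch (function and field names are ours, not AEB's):

```python
from datetime import date, timedelta

def first_reminder(export_date=None, reminder_days=None, reminder_from=None):
    """First reminder under the two configuration options.

    reminder_days : days after the export date ("Reminder (days)")
    reminder_from : fixed date independent of the export ("Reminder (from date)")
    """
    if reminder_from is not None:
        return reminder_from
    return export_date + timedelta(days=reminder_days)

# e.g. export on 2021-06-01 with Reminder (days) = 30 -> 2021-07-01
print(first_reminder(export_date=date(2021, 6, 1), reminder_days=30))
```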
Maintaining settings for reminder of conditions: Switch to the Monitoring sheet. In the General settings field group, enter the interval (days) and e-mail addresses.
Interval (days) indicates the number of days after which periodic reminders are sent out. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00270.warc.gz | CC-MAIN-2021-43 | 816 | 6 |
https://www.reddit.com/r/politics/comments/g3prw/att_inadvertantly_and_beautifully_articulates/ | code | AT&T inadvertently and beautifully articulates rationale for a progressive income tax.
Context: Press release
We (The government) are committed to providing a great experience (life) for all of our Internet customers (citizens). Less than 2 percent of our Internet customers could be impacted by this approach - those who are using a disproportionate amount of bandwidth (a disproportionate amount of money). We will communicate early and often with these customers so they are well aware of their options before they incur any additional usage charges (taxes).
The top 2 percent of residential subscribers uses about 20 percent of the bandwidth (wealth) on our network (planet). Just one of these high-traffic users (multi-billion dollar corporations and individuals) can utilize the same amount of data capacity (capital) as 19 typical households. Lopsided usage patterns (Lopsided investment schemes) can cause congestion (Terrible poverty), at certain points in the network, which can slow Internet speeds (Create a global recession) and interfere with other customers' access to and use of the network (Create a global credit market crash). Our new plan addresses another concern: customers strongly believe that only those who use the most bandwidth should pay more than those who don't use as much(Those who make more money should pay more taxes). That's exactly what this does – and again, 98% of our customers (98% of our citizens) will not be impacted by this.
Edit: Yes, I'm aware it's not a perfect analogy, but it's too close to ignore. Also, noticing a lot of people throwing in words they think are more apt. Do more of that; will edit in the more precise ones as the day goes on.
http://hybrid-graphics-linux.tuxfamily.org/index.php?title=Bumblebee&oldid=501 | code | Bumblebee allows you to run specific programs on the discrete graphics card, inside an X session that uses the integrated graphics card.
It's written by Martin Juhl and published under GPL v3.
What it does - and does not
Currently only Ubuntu, Debian, OpenSuSE and Fedora are supported by this script, but Martin is working on porting it to other distros and making his script more distro-independent. For information about the state of distro porting, or to help with porting, please see https://github.com/MrMEEE/bumblebee/issues/7
Arch and Gentoo are supported by:
Arch Linux - ArchWiki: https://wiki.archlinux.org/index.php/Bumblebee
Gentoo - Iegor: https://github.com/iegor/bumblebee-Gentoo-support
Currently this application shuts down the nvidia card when not in use, but only on Ubuntu. Other distributions are in progress. Balancing the load between the two cards is still missing.
The installer installs the closed-source nvidia driver.
git clone https://github.com/MrMEEE/bumblebee
cd bumblebee
sudo ./install.sh
Installing on Ubuntu
If you're using Ubuntu, you can install Bumblebee from a PPA. Make sure that you've removed previous versions and the x-swat PPA containing the latest nvidia driver:
sudo bumblebee-uninstall
sudo apt-get install ppa-purge
sudo ppa-purge ppa:ubuntu-x-swat/x-updates
Next, proceed with installing bumblebee from Martin's PPA.
sudo apt-add-repository ppa:mj-casalogic/bumblebee
sudo apt-get update
sudo apt-get install bumblebee
You can compare display performance with and without bumblebee:
glxgears
optirun glxgears
Note: when running just glxgears, if you see this message:
Running synchronized to the vertical refresh. The framerate should be approximately the same as the monitor refresh rate.
The FPS value will always be close to 60. Instead you should do:
To reconfigure bumblebee:
To uninstall or clean up earlier versions:
Use the bumblebee-bugreport script to create a nice bug report that you'll send here. | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864572.13/warc/CC-MAIN-20180521235548-20180522015548-00532.warc.gz | CC-MAIN-2018-22 | 1,949 | 23 |
https://oscarvgg.com/2018/04/12/color-bombs/ | code | Apr 12, 2018
This is a game I built using Unity. I did the design myself. The language I used to code it was C#.
Released on Google Play and Amazon App Store. Coming soon to the Apple App Store.
- Amazon Game Services
- Google Play Game Services | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510942.97/warc/CC-MAIN-20231002001302-20231002031302-00615.warc.gz | CC-MAIN-2023-40 | 245 | 5 |
http://www.physicsforums.com/showpost.php?p=3541082&postcount=6 | code | Google Image search is amazingly accurate
Oct5-11, 02:16 PM
(Google image search)
Choose an image:
Then drag it into the search box.
I don't understand how to get it from your post to google. | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123996.28/warc/CC-MAIN-20140914011203-00303-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | CC-MAIN-2014-41 | 208 | 7 |
https://community.zapier.com/code-webhooks-52/why-can-t-i-feed-by-webhook-payload-into-subzap-27648?postid=114937 | code | We use Paddle for customer billing, and I want to create a variety of Zaps that send notifications to the team when users do various things like upgrade, cancel, lapse on payment, etc.
We have a limited number of endpoints we can use in Paddle (only 5), so I can only use one for Zapier. My thought was I create a Zap that triggers by receiving any webhook from Paddle, and then using paths I can branch it out and do stuff based on the action (for example, alert_name = subscription_cancelled goes down one path, subscription_payment_failed goes down another path). Each path is quite involved, with steps to look people up in Hubspot and Notion, so I was going to send these out to Sub-Zaps to keep it clean.
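Outside of Zapier, the fan-out described here is just a dispatch on the payload's alert_name field; a hypothetical sketch of the same routing pattern (handler names are ours):

```python
def handle_cancelled(payload):
    ...  # e.g. look up the customer in Hubspot/Notion, alert the team

def handle_payment_failed(payload):
    ...

# one endpoint, branched on Paddle's alert_name field
HANDLERS = {
    "subscription_cancelled": handle_cancelled,
    "subscription_payment_failed": handle_payment_failed,
}

def route(payload):
    handler = HANDLERS.get(payload["alert_name"])
    if handler is not None:
        handler(payload)
```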
I got this branching stuff working great. The issue is, when I try to feed in the output of the webhook to the subzap Input Argument: Webhook data, it doesn’t let me. It literally doesn’t let me click. When I click into the Insert Data pop-down, it shows “1. Catch Hook in Webhooks by Zapier” but when I try to click into that, it just closes again.
Strangely, I CAN use the Webhook variables in the branching logic. So why am I not able to feed these items into my subzap!? | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00864.warc.gz | CC-MAIN-2023-50 | 1,193 | 4 |
http://creators.ning.com/forum/topics/anyone-know-how-to-get-a-complete-list-of-tags-for-a-ning-site?xg_source=activity | code | I once knew how to do this but have forgotten and lost my instructions to self. What I am wondering is whether anyone has discovered a way to retrieve a complete list of all tags used on a Ning network, including photos, forums, videos, groups, etc.
If anyone out there has some insight into this, it would be much appreciated. I've spent 3 hours today trying to find my notes, and I believe they are on another hard drive...
I've already tried searching Google with allinurl:, by the way.
Drill down and Export a Google Analytics report.
Set your reporting period nice and long. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705976722/warc/CC-MAIN-20130516120616-00043-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 592 | 5 |
https://forums.civfanatics.com/threads/trading-posts-in-native-villages-similar-to-missions-implemented.654934/ | code | Base Idea: As most of you definitely know, "Missionaries" can set up "Missions" in Native Villages. We could also implement "Traders" that set up "Trading Posts" in Native Villages using the same pattern. Trading Posts would be used by private traders and Natives to conduct business. For that service they will pay you a part of their profit (in gold). I would like to keep this simple and would not try to replace "Trade with Wagon Trains" or "beam" goods. Thus Trading Posts would simply generate a small gold amount every turn, and every couple of turns (e.g. 15) a Treasure Unit would be spawned with that gold amount.

Base Mechanic
- Profession "Trader" (which might e.g. need 100 Trading Goods to be equipped) that can set up Trading Posts in Native Villages
- Expert "Seasoned Trader", which has higher success chances to do so and creates more efficient Trading Posts (they can be acquired by Immigration from Europe or Python Events)
- Setting up a Trading Post is done by moving the Unit to the Village and using the "Set Up Trading Post" button
- There will be success and failure chances for setting up a Trading Post, depending on the Attitude of the Native Nation and some base settings
- In the process of setting up the Trading Post, the Unit would be consumed
- Setting up Trading Posts will slightly improve your relations with the Native Nations as well
- When a war starts between you and a Native Nation, that Nation will burn down all your Trading Posts in its villages
- A Trading Post of another Nation can be replaced by successfully setting up your own (there can only be one per Native Village)

Visualization:
- A "Trading Post" icon would be displayed on the Billboard of the Native City - similar to the "Mission Cross"
- Instead of spawning "Converted Natives", it will spawn Treasure Units with gold (see below)

Gold calculation / balancing for Treasure Units
- The gold amount per turn generated by a Trading Post will be relatively small. It will not make Wagon Trains useless.
- The gold amount per turn will depend on the Population of the Native City and the Attitude of the Native Nation.
- If the Trading Post was set up by a "Seasoned Trader", it will also produce more gold.
- All of the base settings for balancing will be in GlobalDefinesAlt.xml.
- There will also be a small random factor in the logic.

Gameplay
It might not be the most innovative or complex concept ever, but I think that it would still fit. This just gives another small possibility and incentive to interact with Natives peacefully. However, efforts and risks are relatively low. It is also not complicated to understand if you already understand "Missioning". Maybe we could also implement some additional Python events for flavour.

Effort and Risks
- The AI already knows how to use Missions; we can teach it Trading Posts as well.
- Efforts are not unreasonable and there are very low risks.
- We will need new graphics (Profession, Expert and Billboard icons).

I am looking forward to your feedback and suggestions. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00513.warc.gz | CC-MAIN-2021-25 | 2,979 | 1 |
https://www.telerik.com/forums/how-to-create-flip-effect | code | Hi Paresh Sen,
You can flip any set of controls using the new PlaneProjection class provided with Silverlight 3.
Here are some links that you could use as a reference:
Silverlight 3.0 - 3D Flip Action using Plane Projection
Silverlight 3 Tutorial - PlaneProjection and Perspective 3D
the Telerik team | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00113.warc.gz | CC-MAIN-2019-09 | 300 | 6 |
https://sbdevel.wordpress.com/2009/02/26/code4lib-day-2/ | code | (Notes by Mads and Jørn, showeled in by Toke, let’s hope we find the time for proper edits later – we’re running very fast here, but it’s great)
Keynote: Sebastian Hammer
Journals and books will disappear faster than we think.
Maybe we don’t need to follow market forces – sometimes they fail. We need to do more of the boring stuff – ie. less code, more standards.
The libraries need to be more proactive.
What (local) libraries do well:
- Preservers of cultural heritage
- Conveyers of authoritative information
- Supporters of learning and research
- Pillars of democracy
Libraries need to be the more open alternative to the commercial players – ask questions, put up a fight.
It needs to be a lot easier to put systems together – standards are needed for collaboration.
Systems and organisations need to surrender data freely.
The Open Library Environment
Lifting the ILS up to be a more integrated part of existing system – ie. if there is already an aquisition system, then talk to that, integrate with existing single sign on solution. The is no code yet from the Open Library Environment, but many plans for architectural components.
OLE is defining the core Service-Orientede architecture – but are very interested in feedback, and for that they have put up a survey
A running system is 2 to 3 years away.
Blacklight as a unified discovery platform
“Yet another next-generation catalog”. Very much about serendipity. A lot of the “aha”-moment come when you are browsing – and that is being taken away from the users by just giving them the exact electronic versions, and there is currently nothing there to replace this serendipity.
Solr as a backend, so it has relevancy ranking, facetting, unicode support, etc. All of which are great for the user experience. Blacklight also has permanent URLs, RSS-feeds, and more. Allows RESTful access to their data to make it easier for other people to do mashups.
No single interface can fit all – ie. chemistry students have different need than students from the music department. So they have made specific search
interfaces available to help answer the common questions posed by the different, for example searching by musical instruments. Create new fields when indexing to facilitate searching, for instance based on the year the music was composed they create groups of genres.
Blacklight also contains all the data from their Fedora installation using gsearch so they get live updates to their Solr index.
Items from collections have different behavior, ie. a scanned picture is displayed differently than a scanned book.
Cooperates with Vufind to index marc in Solr (solrmarc), and also do a lot with marc4j. Standardizing on jangle.
A new platform for open data – biblios.net web services – Galen Charlton (LibLime)
- libraries agree with principles
- not all have the staff
- sell sevices
open data – the final frontier
- libs provide licences but not open dt th cm
open library project
open data commons licence
interface libs for UI’s
- free browserbased cataloging service
- webservices – to interact with biblios.net – push/pull records
- push records to create standardized access
- support for SRU/OpenSearch search
- multiple formats (XSL transformations) MARC/dc/mods
Extended biblios – the open source web based metadata editor – Chris Catalfo
- implementing plugins to extend biblios.
- loads any plugin defined in biblio’s config file
- example – create editor plugin
- example – adding Extjs/documents using CouchDB as backend
- example – listening to biblios.app events
What we talk about when we talk about FRBR – Jodi Schneider
We talk about different things when talking about FRBR
if I refer to a book – different place but no connonical version. We would like one place. Same with lib catalogue – can i get it. FRBR as obout relationships.
Weak idea of FRBR:
- group manifestation
- work – manifestation relation
- work state grouping – xISBN/LibraryThing service (group manifestation service)
- status of FRBR work set grouping
There is more to it
- work set grouping does not say anything about contents
Less weak FRBR:
- open library –> example of work identifier (instead of just grouping)
- we must have items and expressions related back to work and manifestation
example: LC FRBR display tool
- beyond just works and manifestion
example: VTLS catalogue
- collections by works/manifestations (all of author)
- figure /group 1&2&3 enteties (the complete FRBR notion)
- connect and remix, link up with other systems representaion of FRBR i RDF (IFLA, Davis&Newman), LIBRIS linked data (FRBR related tag), id.loc.gov,
We are going for a (re)usable biblioegraphic universe!
What can we do?
- demand strong FRBR
- build linked data (freebase)
- create the algorithms – share under open license
Toke held a good and very technical (read super geeky) talk on Summa’s faceting system. Of course some people didn’t understand a word he said (thats not nescessary a bad thing) but those who did get it were very excited. Toke was a very ‘in demand’ person after his talk. We are looking forward to seeing which opportunities this will bring.
The Rising Son – making the most of Solr
- java is not slow, measure.
- omitNorms, omitTf
- “it depends”
- …DIH, Solr Cell, CSV, LuSql…..
- SOLR ruby mapper
Solr as IR toolkit
- frontend to Lucene
- geo searching, submit lat/long queries
- Solr 1.4 perfomance – dramatically advance in performance. Multi select facets.
- “the interface is the app”. Abanded the bottom up (data to app). Think app. first and bring it down (app to data).
SolrJS library – standard AJAX components
Lucid articles – tutorials – podcasts – blog
FreeCite – An open source free-text citation parser
example – Brown searchable database – no citation links
first step is parsing the data to the relevent fields. FreeCite handles this.
FreeCite has a webservice API.
What does FC do?
response: OpenURL – ContextObjetct and JSON. “This is rocket surgery” –> the data is not clean (would be trivial if the data was wellformed). E.g. letters in volume, title and jounal name after each other.
Great facets, like your relevance, but can I have links to Amazon and Google Book Search – squeezing more out of the OPAC
Goldrush discovery UI, a lot of examples. “But we want more!”. Lots of individual hacking..no similarity and that gives low reuse. We need component like extension. The answer is JUICE. Examples of contents pulled in via Iframes, oplæsning, fadedown integrated on the web site (greasemonkey way).
JUICE (supported by Jquery) –> metadefs –>panels –>extension
– a few JS lines ingested to the page
- easy to copy/paste
- easy to create new by modifying
- shares OPAC specific knowledge
- very little product specific dependencies
- open source
Freebasing for fun and enhancement – Sean Hannan
REST, api.freebase.com. Returns JSON.
Example: Acedemy ward winners: FreeBase schemas defines the fields. Acre – templating. Subquering – open a new JSON block and query again. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647044.86/warc/CC-MAIN-20180319175337-20180319195337-00683.warc.gz | CC-MAIN-2018-13 | 7,110 | 101 |
https://databasecamp.de/en/data/star-schema | code | The star schema describes the arrangement of database tables, which should be as memory-efficient and powerful as possible. As the name implies, the tables are arranged in a star shape in a so-called fact table, which is surrounded by several so-called dimension tables.
What is the structure of the scheme?
With large amounts of data stored in databases or the data warehouse quickly becomes confusing and queries are not only complicated but also take a relatively long time. Therefore, intelligent ways are needed to create tables so that memory can be saved and queries can take place more quickly.
The first approach to this is the star schema, which includes star-shaped table structures. A distinction is made between facts and dimensions:
- The facts are key figures or measured values that are to be analyzed or illustrated. They form the center of the analysis and are located in the central fact table. In addition to the key figures, this also consists of the keys that refer to the surrounding dimensions. In the business environment, facts are, for example, the sales quantity, the turnover, or the incoming orders.
- The dimensions, on the other hand, are the properties of the facts and can be used to visualize the key figures. The different levels of detail of the dimensions are then stored in these and thus memory can be saved since the details only have to be stored once in the dimension table. Dimensions in the corporate environment are, for example, customer information, the date of the order, or product information.
The star schema deliberately omits normalization, which is normally an important concept in database theory. The third normal form is namely violated with a star schema. On the other hand, the structure is particularly efficient and provides fast answers even for complex queries.
What is the Normalization?
When data is stored in databases, this can quickly become very confusing and redundant. Therefore, when creating a database schema, one should think about how redundant information, for example, duplicates, can be avoided.
Normalization is a sequence of different steps that are intended to prevent more and more avoidable redundancies. For this purpose, there are the so-called normal forms, which build on each other and have increasingly strict rules. For the star schema, only the first three normal forms are interesting, since a database in the star schema only fulfills the first two normal forms, but not the third:
- A database is in 1st normal form when all attributes/columns have only a single value. That is, there is no accumulation of values in any field.
- A database is in 2nd normal form when every attribute in the table is fully dependent on the primary key. This also means that all attributes that do not depend on the primary key must be swapped out to a separate database table. Of course, a database that is in 2nd normal form must also satisfy 1st normal form at the same time, since they build on each other. The same applies to the subsequent normal forms.
- A database is in 3rd normal form if no attribute that is not a primary key of the table does not refer to another non-key attribute. If this is the case, a new relation, i.e. a new table, must be created for it.
The star schema misses the third normal form in most cases because the dimension tables often contain several attributes that are not primary key attributes but nevertheless refer to each other. In the dimension table “Products”, for example, the price can be determined by the combination of the “Product name” and the “Color”, although neither the product name nor the color is a primary key attribute.
What are the advantages and disadvantages of the scheme?
Although the arrangement of tables as a star schema does not meet the requirements of normalization, since the third normal form is not given, it has some advantages that make it very popular in practical applications:
- The arrangement in the star schema is optimized for a high query load and thus offers the possibility to process even complex queries efficiently.
- Furthermore, by deliberately omitting the third normal form, unnecessary join operations are not necessary for most queries.
- By the arrangement in the star schema, a majority of the arising redundancies are avoided. This also leads to the fact that the dimension tables require comparatively little storage space and thus large amounts of data volume are saved.
- The star schema is a very understandable arrangement of relations in many applications since the division into fact and dimension tables is very intuitive and comprehensible.
However, there are also use cases in which the use of the star schema is not optimal, for example, when the dimension tables become very large and there are frequent queries on these tables. Then the query times can deteriorate significantly. In addition, as already mentioned, there may be redundancies in the data. Therefore, in addition to the star schema, a second database schema has been formed, which is intended to remedy the disadvantages.
What aspects should be considered when implementing a star schema?
The implementation of a star schema requires some considerations to ensure optimal performance and efficient querying. Some important implementation considerations are:
- Denormalization: In a star schema, the fact table is denormalized, which means that redundant data is stored in the table. This reduces the need for joins when querying the data and improves query performance.
- Data types: Choosing appropriate data types for the fact and dimension tables is important to ensure efficient storage and retrieval of data. The data types used should reflect the nature of the data being stored.
- Indexing: Creating appropriate indexes on the fact and dimension tables is crucial for fast querying. The indexes should be based on the queries that are commonly run on the data.
- Partitioning: Partitioning the fact table into smaller chunks based on time periods or other criteria can improve query performance by reducing the amount of data that needs to be scanned.
- Aggregation: Pre-aggregating data can improve query performance by reducing the amount of data that needs to be processed. Aggregation can be done at the fact or dimension level.
- Query optimization: Query optimization is important for ensuring that queries are executed efficiently. Techniques such as query rewriting materialized views, and caching can be used to improve query performance.
Overall, careful consideration of these implementation factors is critical for the successful implementation of a star schema and optimal performance when querying the data.
What is the Snowflake scheme?
The so-called snowflake scheme is a further expansion stage of the star scheme with the goal of completely normalizing the tables and thereby circumventing the disadvantages of the star scheme to a certain extent. The structure of snowflake results, in short, from the fact that the dimension tables are broken down and classified even further. The fact table, however, remains unchanged.
In our example, this could lead to the dimension table with the delivery addresses being further classified into country, state, and city. This normalizes the tables and the third normal form is also fulfilled, but this is at the expense of further branches. These are particularly disadvantageous in the case of a later query since these must be reassembled with complex joins.
The further branching thus leads to the fact that the data is stored less redundantly and thus the amount of data is reduced again in comparison to the star schema. However, this is at the expense of performance, since the dimension tables have to be merged again during the query, which is often very time-consuming.
Starschema vs. Snowflake-Schema
The Star schema and the Snowflake schema are relatively similar in structure and are often compared with each other for this reason. In fact, the choice of a suitable database schema depends mainly on the concrete application.
In short, the goal of the star schema is to provide a good basis for frequent queries and still reduce the amount of data. This is created by splitting into fact and dimension tables. This allows many redundancies to be removed and the first two normal forms to be satisfied. The number of tables remains relatively small and thus queries with few joins and fast response times are possible. However, complete normalization of the database cannot be performed and some redundancies remain.
The snowflake schema, on the other hand, is a further development of the star schema with the aim of bringing about a normalization of the database. The fact table is retained and the dimension tables are further classified and divided into additional relations. Although this eliminates the remaining redundancies of the star schema, it makes queries slower and more time-consuming, since the dimension tables must first be merged again.
This is what you should take with you
- The star schema is a database schema that is used to enable the most efficient database queries.
- For this purpose, the original data is divided into the so-called fact table and several dimension tables.
- Although the star schema already eliminates many redundancies, some information is still stored twice. This is another reason why the star schema does not meet the requirements of normalization.
- A further development of the star schema is the so-called snowflake schema, which divides the dimension tables again into finer relations. However, this is at the expense of query performance.
Other Articles on the Topic of the Star Schema
Microsoft wrote a very interesting post on star schema and what it means for their business analytics platform Power BI. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00169.warc.gz | CC-MAIN-2023-50 | 9,802 | 45 |
https://educommunity.minecraft.net/hc/en-us/community/posts/4405057585812-How-to-Execute-code-from-Agent-s-position-?page=1#community_comment_4405069945748 | code | I've been searching for how to do this but so far have not found any threads about it.
I have set up a code that I can use to create a color coded build battle arena. It is positioned over a carefully placed underlay of border_blocks, allow blocks and deny blocks so I want to make sure that each time I set the arena and raise/lower the walls, that it happens from where my agent is positioned in the middle.
I know I can do it by standing right in the middle before I run the codes but it is so easy to be off by a bit and really mess things up. It would be so much easier if I could some how place the agent where I want the code to run from (that way I can be moving around when it is time to raise or lower the walls)
I thought perhaps the execute as might work but it only seems to be able to say something, doesn't seem to have an effect on the rest of the blocks following it.
any tips on how to accomplish this?
I don't want to have to code hard world coordinates into it as I want to make it possible to build the structure anywhere in the world and be able to use the file in any world.
Please sign in to leave a comment. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00252.warc.gz | CC-MAIN-2023-50 | 1,132 | 7 |
https://stevevincent.info/CAP211_37.htm | code | CAP 211 -
Interactive Design and Game Development
Chapter 37 - Simulating Physics-Based Motion with reactor
This lesson discusses reactor. Objectives important to
- The reactor utility
- Using collections
- reactor modifiers
- reactor objects
The chapter begins with an introduction to reactor, a built-in utility for 3DS Max. reactor (it is never capitalized) is a physics simulator. It helps model the physical interactions of objects and forces. Like the last chapter, we are now in a world in which objects can have hard surfaces. This is different from the models we have made previously that could easily pass through one another.
The text explains that reactor is accessible from the Utilities panel in 3DS Max, and it is also accessible from the reactor menu on the menu bar. You may hear the words reactor and Havok used interchangeably: Havok is the company that wrote the reactor program. 3DS Max 9 comes with the reactor version 1 (Havok 1) and reactor version 3 (Havok 3) engines. Note that only the reactor version 1 engine is suitable for soft objects like cloth and rope.
As usual, the text describes many rollouts and parameters associated with this feature. The big picture this time is the settings describe objects to the reactor engine, allowing it to cause the objects to be affected by the laws of physics. ("Ye canna change the laws o' physics, laws o' physics, laws o' physics,
Ye canna change the laws o' physics, laws o' physics, Jim.")
Tutorial Notes and Questions
Tutorial 1 (Filling a glass bowl):
(Warning: The last step in this sequence takes a long time to complete.)
- Open the indicated file.
- Create a new reactor concept: Rigid Body Collection. Tell it you want to add objects to the collection.
- Select and add all objects in the scene to the collection.
Question 1: Why all the objects, and not just the marbles that will fill the bowl?
- Follow the instructions for the bowl.
Question 2: What does unyielding mean here?
- Set the mass for the marbles as instructed.
- Preview the animation through reactor.
Question 3: Why can't you just scrub the timeline at this point?
- The text says to have reactor compute keyframes if everything looks fine.
Question 4: What would you change if it did not look fine?
In the tutorial above, you used a collection. The text goes on to describe other collection types. As you will have noticed, adding an object to a collection give the object physical properties from the collection.
- rigid body collection - resists force (solid objects)
- cloth collection - bends to all forces (like normal cloth, not chain mail)
- soft body collection - flexes to force (think of a teddy bear or a Nerf ball)
- rope collection - supports tension (you can pull it), does not support compression (you can't push it)
- deforming mesh collection - deforms (changes shape) when acted on by forces (think of silly putty). Used for character meshes.
The text notes that the flexible types (everything except rigid) above are only as flexible as their number of segments allows. Think of the lines between segments as being like hinges: more of them makes it easier to bend the object, or to bend a small portion of it. Think how much more flexible a finger would be if it had twenty joints in it, instead of the default one on a biped. In the same way, imagine throwing a blanket over a chair. If the blanket has four segments (2 by 2), it may as well be aluminum siding. If it has 400 (20 by 20), you get a much better drape of the fabric.
The text also advises us that we can add an object to multiple collections, but this will generate a warning when we preview the animation. Take this as an indicator that you usually want to include an object in just one collection.
The text also explains that we can't add an object to a cloth, rope, or soft body collection unless the object has an appropriate modifier showing that it belongs in such a collection. The modifier will have the same name as that type of collection. Instead of adding it to the object in the modifier list, select the object, open the reactor menu, and choose Apply Modifier. The modifiers have a purpose: they give you access to the vertices of the objects they are applied to, enabling you to modify reactor properties by vertex. They also add properties that relate to the type of modifier:
- cloth modifier properties - mass, friction, relative density, air resistance
- rope modifier properties - mass, thickness, friction, air resistance, spring (bungee cord)
- soft body modifier properties - mass, friction, stiffness, damping
Most modification of reactor properties will be done through the individual item, not the collection, although the reactor Property Editor will let you set the properties for several items together. Some properties to be aware of:
- mass - The text says this sets the heaviness of an object. More properly, it sets the density, the relative mass per volume, and the inertia, the resistance to force, of the object. Since the author tells us to study physics, we should know that mass and weight are not the same thing. An object in flight or free fall will still have mass and inertia, but its weight will be unimportant. Follow the link in the last sentence for a high school review of this physics concept.
- friction - The object's resistance to slipping and sliding.
- elasticity - The relative bounciness of the object.
- inactive - This means that reactor will not consider the object when solving the scene.
- disable all collisions - The object will not collide with other objects.
- unyielding - The object will not move in the scene.
- phantom - The object does not affect other objects.
Settings for the reactor version 3 only:
- shell - How far away from the object can we be and still have a collision?
- penetration - How far are objects allowed to penetrate this object? Setting this speeds up the solution.
- quality - How important is the object to the animation?
- Debris - low importance
- Moving - medium importance
- Critical - high importance; should not penetrate other objects
- Bullet - fast objects (not related to penetration)
The text brings up the concept of whether an object is concave or convex. Its definition is not clear. In general, convex objects curve out and concave objects curve in, but that's not good enough for a test. A satellite dish is concave because it has a big dimple, but that does not give you a valid test, either.
- Think of a sphere. You can connect any point in or on the sphere to any other point in or on the sphere with a straight line that does not leave the sphere. This makes it convex. The same test works for a cube, so to 3DS Max, a cube is convex as well.
- Think of a bowl. There are many points in and on the bowl that when connected by straight lines, the line is outside the the body of the bowl. For instance, consider the points connected by the arrow on the cereal bowl on the right. The arrow goes through open space that is not part of the bowl. This makes the bowl concave.
Why do we care? First, convex objects render faster, so we should use them when possible. Second, you want to know whether an object is concave or convex in order to set a collision boundary as described on page 900. Doing so will also speed up rendering. You probably care about rendering speed if you did the tutorial above which tied up your computer for an hour.
Tutorial 2 (Throwing a shirt over a chair):
- This tutorial reinforces the ideas presented so far. Open the file indicated in this step. Note that you have a floor, a chair, and a shirt in the scene.
- Follow the instructions to create a rigid body collection, and to add the floor and chair to it. Note that you do not have to add modifiers to the floor or chair to add them to this collection.
- Follow the instruction to make the floor and chair unyielding. Note the instruction to leave rigid body collector mode before starting the next step.
- Add a reactor cloth modifier to the shirt as instructed. Note that this is required in order to add the shirt to a cloth collection.
- Follow the instructions to use a new method: create a cloth collection, but do it with the shirt already selected to automatically add the shirt to the collection.
- As you did in the last tutorial, bring up the preview window, wait, and then press P to preview the animation. (Don't get in a hurry.)
- If the last step worked, follow instructions in this step to create the animation. Again, it will take a while to complete.
The text turns to creating reactor objects. Nine reactor object types are listed on page 902. I found the text's lack of a definition for "dashpot" irritating. The article on Wikipedia is more enlightening. It may be easiest to think of the device on a screen door that prevents the door from slamming shut as an example of a a dashpot.
- spring - links two objects, a parent and a child, to pull them toward each other
- plane - presents a surface that rigid body objects cannot penetrate
- linear dashpot - links a parent and a child, acts like a shock absorber between them
- angular dashpot - links a parent and a child, acts like a shock absorber between them
- motor - spins rigid body objects
- wind - applies wind force to the scene; various setting simulate a variable wind
- toy car - applies to body and wheel objects in a scene to make them act like a car
- fracture - allows reactor objects to break and blow up
- water - represents a reactor version of water in the scene
Tutorial 3 (Driving a monster truck over a hill)
Tutorial 4 (Smashing a gingerbread house)
Tutorial 5 (Working with water)
- Pick one of the three tutorials above.
- Work together with at least one other student to do the tutorial.
- Turn in notes for the tutorial along with two questions and answers about it based on your observations. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660070.15/warc/CC-MAIN-20190118110804-20190118132804-00067.warc.gz | CC-MAIN-2019-04 | 9,824 | 82 |
https://e1commerce.com/items/magento-or-shopify-which-is-best-for-integration-with-python-or-which-provide-be | code | Magento or Shopify which is best for integration with Python or which provide best APIs for Python
I am a Python developer. I wanted to create some online shopping stores which will be fully customized, Database will Mysql, Redis for caching, mail-gun for mailing, AWS hosting and Theme may be customized. I am confused in both platforms Magento and Shopify. Please Help Which have the best integration with python.
Yes Rehan is correct to go with the magento framework
Magento is better than Shopify for large scale stores with huge catalog and magento is an open-source platform that gives user significantly greater adaptability as he can transform it the way you need to.
Magento definitely has the advantage when it comes to capabilities and flexibility. While Shopify is hosted by the company, and Magento is a self-hosted solution can be Host on AWS or any other hosting service.
As frontend themes in Shopify are proprietary, they don’t allow lot of tweaking. Beyond changing colors and font, Shopify users are not able to fully customize themes, creating a possible issue for store branding while Magento themes come quite customize-able though with a developer expertise needed in most cases.
You can probably get better answers here: https://www.quora.com/What-are-the-pros-and-cons-of-Magento-vs-Shopify (https://www.quora.com/What-are-the-pros-and-cons-of-Magento-vs-Shopify)
Didn't find the answer?
Our community is visited by hundreds of Shopify development professionals every day. Ask your question and get a quick answer for free.
Find the answer in similar questions on our website.
Write quick answer
Do you know the answer to this question? Write a quick response to it. With your help, we will make our community stronger. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00741.warc.gz | CC-MAIN-2022-49 | 1,750 | 12 |
https://docs.microsoft.com/en-us/archive/blogs/dphill/the-application-architecture-guide-2-0-is-here | code | The Application Architecture Guide 2.0 Is Here!
The Application Architecture Guide 2nd Edition is finally available! The complete guide is hosted on MSDN here. We’ve sent the guide off to MS Press for printing, so the printed version should be available from your favorite bookstore soon.
Since the final draft of guide was published on CodePlex early this year, we’ve been busy driving the guide through a final set of reviews and polishing the content. Based on community and p&p team feedback we’ve restructured the guide to make it easier to read and understand. We’ve also removed some of the duplication and made it easier to find your way around so you can more easily find the information that’s relevant to you. We’ve also incorporated a lot of the feedback that we’ve received from the community on topics like Domain Driven Design and Cloud Computing. It’s taken us a little longer than we thought, but we hope you’ll agree that the wait was worth it.
Of course, there are so many elements that go into producing a guide of this scope that it clearly represents a team effort. In particular, the many members of the community and the p&p team who brought their technical expertise and experience to the guide deserve a special mention, along with our esteemed writer Alex Homer and the rest of our content team who drove the guide through the edit, layout, indexing, and publishing phases.
What’s next for the guide? Well, that’s up to you! We designed the guide so that it could be more easily updated; for new architectural techniques and approaches, for new technologies, and for new application types and scenarios. For example, we could add a Windows Azure application archetype, or update the technology matrices for the .NET 4.0 wave of technologies. In order for us to prioritize where we go next with the guide we need your feedback, so please send it along and let us know what you think. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00306.warc.gz | CC-MAIN-2020-24 | 1,931 | 5 |
Release aNyMOOR v1.2.7 - February 2016
- Number of lines in the TermSim configuration is increased to 50.
- Number of fenders in the TermSim configuration is decreased to 20.
Release aNyMOOR v1.2.6 - October 2014
Bug fix v1.2.6:
- Hawser failures did not work (ANYMT-22)
Release aNyMOOR v1.2.5 - September 2014
Bug fix v1.2.5:
- When batch runs were started from a DOS-box with the BatchRunAnysimDLL.exe the batch looping process stalled after each completed run. (ANYMT-21)
Release aNyMOOR v1.2.4 - August 2014
Bug fixes v1.2.4:
- The draft was not properly accounted for in the direction of the normal of the fenders. The direction is kept in the range -pi..+pi to ensure correct values.
- The index of the output signals of the X-, Y- and Z-components of the fender force was not correct (ANYMT-18).
Release aNyMOOR v1.2.3 - August 2014
Bug fixes v1.2.3:
- Wave spectrum input fields (Gamma, Sigma, Random seed) are now correctly greyed out depending on the selection (ANYM-142)
- When using a wave train, the keywords Hs, Tp and Waveseed will not be written to the ini-file (ANYM-142)
- The error "Failed to find the native call to the mooringline calculation" has been fixed (ANYM-344)
- When using a wave train, a warning message is shown when the wave train sample time is not equal to or a multiple of the simulation time step (ANYM-330)
- When a wave train is shorter than the simulation, a clear error message is shown and the simulation won't start
- For regular waves the random seed need not be set; the wrapper fills in a 1 because it is compulsory in aNySim (ANYM-336)
- In the tab "Signal selection" the Min and Max values are now actually used; in previous versions only the default values were used (ANYM-349)
- Minor database corrections when opening a new project (does not affect calculations)
- When using a regular wave in combination with Aranha damping, the Aranha damping is switched off. The user is notified
- When using a regular wave as the first wave input, the second wave input (swell) is disabled
- When wave parameters are entered that are outside the permitted range, the user is notified
aNyMoor v1.2.3 release date is 18 February 2014.
Release aNyMOOR v1.2.2 - January 2014
Bug fixes v1.2.2:
- Fixed bug in fender normals (ANYM-342)
- Small bug fixes.
aNyMoor v1.2.2 release date is 12 December 2013.
Release aNyMOOR v1.2.1 - October 2013
Bug fixes v1.2.1:
- Fixed wrong calculation of catenary span for MBM lines (ANYM-333)
- aNyMoor provides a warning when z=0 is defined for anchor points. Fixes crash (ANYM-334)
- User defined material properties are now copied correctly when defining a leg with small diameter changes (ANYM-335)
- aNyMoor now calculates the fender normals in the direction from connection 1 to connection 2 (ANYM-337).
- aNyMoor now checks whether the user defined compressions of fenders are in increasing order. If not, the wrong value is removed and a warning is given in the DOS-box. The simulation will run, but with less points in the compression-load table (ANYM-338).
aNyMoor v1.2.1 release date is 21 October 2013
Release aNyMOOR v1.2 - January 2013
aNyMoor v1.2 is the successor of aNyMoor v1.1.6, which was only available to JIP participants. Now the software is also commercially available, with a complete redesign of the workflow including:
- Input checks
- Error warnings
- Copy/paste from clipboard/excel
- Updated to latest OCIMF coefficients (2008)
- More efficient memory use
- Windows 7 compatible
University of Mannheim | Data Enthusiast | R & Python User
e-Mail: [email protected] | Twitter: @kongavras
In this small project, I provide R code to scrape newspaper articles from the web archive. As an example, the script runs on faz.net news articles from 2018. It is written in easy-to-read R code and outputs a tibble including all news articles and the accompanying metadata. Full scaling to other newspapers is not easily possible, since the HTML tags used to contain the articles differ heavily between newspapers.
You can find my code here
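The project itself is written in R; as a rough illustration of the same idea in Python (the web-archive URL handling is omitted, and the container class is a placeholder, since, as noted above, every newspaper uses different markup):

import requests
from bs4 import BeautifulSoup

def fetch_article(url):
    # Download one archived article page and pull out title and body text.
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    body = soup.find("div", class_="article-body")  # placeholder selector
    return {
        "url": url,
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "text": body.get_text(" ", strip=True) if body else None,
    }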
The independent and non-commercial homepage Wahlrecht.de provides researchers, journalists and the interested public with information about all election results and the accompanying polling data from the biggest polling institutes in Germany. However, the data is not provided in an easy-to-use format, but is still published as classical HTML pages. From these, one is able to retrieve the data using web scraping and data cleansing methods. In this project, I show how to use the software R and the package rvest to scrape the polling data and create a nicely formatted data frame using simple base R.
You can find my code here
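For readers who prefer Python, the same tables can be pulled with pandas. This is only a sketch, under the assumption that the polls are published as plain HTML <table> elements (the original solution uses R and rvest):

import pandas as pd

url = "https://www.wahlrecht.de/umfragen/"  # overview page of the institutes
tables = pd.read_html(url)                  # one DataFrame per HTML <table>
polls = tables[0]                           # pick the table of interest
polls.columns = [str(c).strip() for c in polls.columns]  # tidy the headers
print(polls.head())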
Privacy disclaimer: Do not scrape homepages when you are unsure whether the Terms of Service permit extracting information automatically. To see the relevance of this problem, please read the following blog entry. The author does not take any responsibility when the code is applied to any non-personal homepages.
The Left: Code
In 2017, I was pleased to participate in the CorrelAid Meet-Up in Hamburg. During this workshop, we welcomed several new members into our network and were able to develop new project ideas with four incredible NGOs from different fields of civil society. On the second day of our Meet-Up we launched several workshops in an open-space session. I was invited to give a short introduction to Python and felt honored to give a two-hour introduction to over 30 participants.
Since I believe that my code should be publicly available, please find my commented script (in German) here.
From March to September 2016 I was the team leader of our CorrelAid project with the Association of Debating Unions at Universities (VDCH) and the German Debating Society (DDG). VDCH and DDG are committed to promoting an active debating culture at German universities and assist students in developing their talents and skills. They asked us to develop and implement a member survey to get to know their members and their demands and wishes. Building on these results, they aimed to sharpen their organisational and content profile. Together with four project members, we developed a survey from scratch, implemented it online, pre-tested it and launched it to the members of the VDCH and DDG.
After data acquisition, we analyzed the results and produced a full 90-page report on the results and implications of the survey, drafted a short report for the members and presented our results at the yearly full assembly of the VDCH. It was the first time that the debating unions received a full overview of their members, and they realized that their concept had spread all over Germany, Austria and Switzerland. For me it was a great experience to coordinate a research team and generate scientific results with a direct impact on civil society. I am also very proud of my project team, Mirka, Lisa, Thomas, and Fabienne, who did a great job during the project.
If you want to learn more about our project, please read the interview I gave the VDCH, or read our short report online.
Using JIRA System's language instead of User's preferential language to push values to Salesforce (e.g. JIRA Status)
At the moment, some native JIRA fields change depending on the language set by the user.
For example, in the case of native JIRA Status "Open":-
Due to this, depending on the language that...
Dear UiPath Community,
I am struggling right now with adding different "Root Cause Types" for one Claim Number.
I have created a data table with the following headers (see the pic below):
Up to here it works pretty well, and the bot writes the values to this Excel sheet without any issues.
However, in the Excel sheet you can see columns "J" to "L". There are around 3 to 4 root causes for a single claim number (column A), and the bot should extract all the root causes for a single claim number from the web page and pass them to the Excel file.
However, the bot always passes only the last root cause type to this Excel file.
How can I extract all the root causes for a single claim number (column A)?
Overview of "Add Data Row", where OutDT is the name of the data table I created:
Waiting for your response and support!
Thanks & Endeavors
So when you say 3-4 root causes for a claim, that means 3-4 rows with the same claim number, right?
And can you please show how you are taking the data from the source website and passing it to Add Data Row?
Are you filtering the extracted data from the website for a particular claim number and then adding it?
Thanks for your response.
Yes, you are right. There are 3-4 rows with the same claim number. I used "Data Scraping" in order to extract the information from the data table on the website.
Then I used a For Each Row activity, so that the bot reads each and every row one by one and passes the value to the data table (the one I created manually).
PS: The claim numbers I have already captured at the start of the process (see below).
After extracting the claim number, the bot clicks on it and a new pop-up window opens, from which I have to extract 4 different root causes, as shown below:
However, the bot only gives me the value of the last row (LABEL). I would like the bot to give me all 4 root causes, like "Packaging, Packaging, Label & Label".
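For illustration, here is the result the workflow should produce, sketched in Python/pandas rather than UiPath (the column names are made up): the key point is to aggregate all scraped rows per claim number instead of overwriting a single cell on each loop iteration.

import pandas as pd

# Hypothetical scraped output: one row per root cause for a claim.
scraped = pd.DataFrame({
    "ClaimNumber": ["C-100", "C-100", "C-100", "C-100"],
    "RootCause": ["Packaging", "Packaging", "Label", "Label"],
})

# One output row per claim, with every root cause joined together.
out = (
    scraped.groupby("ClaimNumber")["RootCause"]
    .agg(", ".join)
    .reset_index()
)
print(out)  # C-100 -> "Packaging, Packaging, Label, Label"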
If you want, we can share our email IDs and then I could show you how I extracted the information from the source website and how I passed it!
My Email ID:[email protected]
Waiting for your response !
Yes, maybe we can connect and take a look, and I'll see if I am able to provide a relevant solution; I will have to check how you are extracting these root causes from the website.
Email Id- [email protected]
Remote control a RDP session?
submitted by Legionof1 (Jack of All Trades)
Does anyone know a good way to remote into an RDP session (not terminal services)? I would very much enjoy a free option for this. VNC does not work; remote assist has sort of worked, but I can only view and not control for some reason.
Hegelund: What I would have done...
Legionof1 (OP): No budget for that unfortunately.
MethodH22: Free is in your budget.
Legionof1 (OP): I am very strict with myself about what I do and don't do with corporate environments. I will pirate Adobe all day long to put a cat's head on my buddy, but if I am in a business setting I won't pirate or use software outside of its designated design. Hell, if I were like that I would use my MSDN account and deploy one hell of an environment into this office.
Hegelund: Fair enough - do any of these work for you then?
dgretch (IT Manager): I'm confused as to the question here. Are you saying you want a remote control tool that will start a new session on the host machine when you connect, as opposed to VNC, which shows you what is actively happening on the host when you connect?
Legionof1 (OP): I want to be able to remote to the RDP session itself. So, the setup:
- Thin PC VM
- Thin client: User1 RDPs to the Thin PC
I want to remote into thin client User1's RDP session so I can provide support.
Kerackiswhack: On the host, RDP into the Thin PC's host, go to Remote Desktop Sessions (whatever it's called) in Admin Tools, find the server and the session, then right-click > Remote Control. You have to RDP in, though; you cannot log in locally or via vCenter.
henazo: What you're describing is possible with SCCM, but that is not really free.
[deleted]: 2012 doesn't have this; 2008 R2 still has it.
Legionof1 (OP): This is a 2008R2/Win7 environment.
[deleted]: So just enable the "Remote Desktop Services" role and open Remote Desktop Services Manager.
Legionof1 (OP): This worked, you're awesome.
Do you have UAC enabled? Are the users local admins if so? If not, you get locked out when they can't respond to the UAC I believe.
Also, are you an admin yourself to the machines you're offering remote assistance with?
IMO trying to use 'offer remote assistance' is not a good solution in an enterprise.
IMO the best solution to this problem is Dameware. It's not free but is licensed such that you license the techs, not each system. It is by far the most robust remote access solution I've come across though.
If you d/l -- give fake info as Solarwinds has taken it over and their sales people are aggressive.
Your VNC taking you to a command prompt sounds like a problem that's fixable. Are you starting the VNC server on :1 ?? not :0?
[–]Legionof1Jack of All Trades[S] 0 points1 point2 points 2 years ago (0 children)
Its a very small office and I have a budget of negative numbers lol.
I just disabled UAC via GPO but I will need to wait till tonight to apply it to everyone (reboot required).
I am the admin.
I don't download so that is not an option for a deployment environment.
I am no VNC expert so I dun even know what your asking there.
Dorion_FFXI (Security/CCTV): You get what you pay for.
Legionof1 (OP): You're telling me.
Techis1332: Don't you have to request control in remote assistance? I may be wrong, but I think that's how it works. I'm not a sysadmin, but I do tech support.
Legionof1 (OP): I request it, but for some reason it doesn't let me in. I am still playing with it, though, because it works pretty well.
crazykilla (Sysadmin): I can't think of a reason that you would want to remote control an RDP session unless it was running on a terminal server. A server OS with the RDS role installed does this pretty seamlessly, especially if you configure the users in AD to not require permission to take remote control. LogMeIn used to do this before they did away with free solutions. Can you explain a little more about what you're trying to accomplish and the OSes involved?
Legionof1 (OP): No budget for RDS, unfortunately :( I would have LOVED that option.
crazykilla (Sysadmin): Standard licensing should allow for remote connections to itself (within a certain number of concurrent connections). If nothing else you could install the RDS role and play with it. If it is everything you hoped and dreamed it would be, hit the brass up for some $$ for licensing. I may be wrong about that though.
Legionof1 (OP): It's just going to be 1 user/VM, so no real need for RDS. I am just trying to find a solid solution for end-user issues.
[deleted]: You are either making stuff up, or everything you just said went WOOOSH.
WG47: Why not just fix whatever's causing VNC not to work?
Legionof1 (OP): VNC works, but it takes you to a terminal (like if you plugged in a monitor or opened the Hyper-V console). RDP works in a different way.
djdementia: Is the technician's PC always the same PC? If so, you can set up a 'reverse VNC', so that when the user within the RDP session clicks an icon it automatically sends a VNC connection to a technician with a preconfigured VNC 'listener'.
assangeleakinglol: What if they remote in with mstsc /admin or mstsc /console? I have no idea if that would work, but maybe?
I've tested this for Vixie Cron and Cronie.
There is an “extensions” section on the crontab(5) man page which says – Ranges can include “steps”, so “1-9/2” is the same as “1,3,5,7,9”.
*/2 * * * * date >>/tmp/even
1-59/2 * * * * date >>/tmp/odd
If you require more portability across different versions, you may want to stick to the much simpler explicit list of minutes, for example:
1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59 * * * * date >>/tmp/odd
But it might be easier to write a shell script which is called every minute and wraps your command, immediately exiting if it is not called in an odd minute.
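A minimal sketch of that wrapper idea in Python, assuming you schedule it from cron with * * * * * and substitute your real command for the date call:

#!/usr/bin/env python3
# Cron calls this every minute; it exits immediately on even minutes.
import subprocess
import sys
from datetime import datetime

if datetime.now().minute % 2 == 0:
    sys.exit(0)  # even minute: do nothing

subprocess.run(["date"])  # stand-in for the real command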
Are there any best practice guidelines regarding how a plugin should distribute a third-party program with its own installation? If there are no guidelines, then what are the most popular methods?
For example, our plugin requires the installation of a third-party (GPL-compatible) program. To ease installation, the third-party program would be included with our plugin, and the plugin will offer to run a version of the program that is compatible with the user's hosting environment, included in our plugin's assets directory (or another directory?). Updates to the third-party program would be handled by updating it along with our own plugin's updates ([our version].[third-party-version]).
I'm seeing achievements fine, but in the last couple of days the leaderboard has stopped working. I'm able to ping stats.hiddenpath.com fine, and I was seeing the info fine before. I've waited overnight to see if it would fix itself, as this has happened before. I rebooted and validated the install.
Thanks for your time!
P.S. Can you remove some of the people from the leaderboard that have impossible scores? Millions of points on some of these is... really not possible.
http://www.skepticats.com/rox/wrappers.html | code | Site: Home | LnBlog | The Joy of ROX | LinLog
The Joy of ROX: Main | News | Software | Wrappers and Resources | Contact
This page collects AppDir wrappers I've made and some resources I've put together for making ROX application directories. Hopefully, they'll help you get your AppDirs up and running faster and make them more powerful and flexible than before.
Here are some application wrappers that I wrote. These are really more than just simple application wrappers such as you would get from the ROX-Wrappers project. Those are mostly just simple "exec prog" type wrappers, whereas these are a bit more complex. They have features like custom AppMenus, intelligent handling of commandline arguments, and even simple GUI front-ends. I like to think of them more as ROX front-ends than wrappers, but that's a different story.
Most of the AppDirs that are written in C or some other compiled language use AppRun scripts that are copied more or less exactly from AppRun script of ROX-Filer or ROX-Session. There's nothing really wrong with these scripts; they do their job and work on pretty much any UNIX around. However, they were written with a definite prejudice toward AppDirs of single graphical programs. While this is understandable (how many ROX projects aren't single, graphical programs?), it makes things difficult when you wish to write or repackage a program that has multiple executables or that is used from the command line.
To address this, I've made some extensions to the stock AppRun script. I have placed a copy of this extended AppRun script in the AppDir template below, and the documentation that comes with the template includes a description of the AppRun and its extensions.
This is a sample AppDir that I put together. It contains a copy of the extended AppRun script described above, an AppInfo.xml file with blank fields to fill in, and a Help directory with a copy of the GNU GPL and a description of the AppDir template itself. I recommend this as a starting point for repackaging programs as AppDirs or writing your own AppDirs.
I've written a paper on how to repackage an existing program as an AppDir. It's not a step-by-step tutorial or anything, just some tips, tricks, and things you should aim for. It's still a work in progress, but it is becoming pretty substantial and should (hopefully) be helpful. I also wrote up a few random thoughts on creating AppDirs, if you're interested. This one is much shorter and includes some details on dealing with repackaging programs that need to be run at the command line.
I've written a small Python program called PyMessage that is very useful for making AppDirs. It pops up a small window with a message, some buttons, and, optionally, a text entry box or a combo box. You run it from the command line and can access the button pressed via the exit code (or standard output, if you specify the --print option) and the text of the entry box/combo box via standard output. It's great for when you need the user to make a choice or give you some quick input. It's the perfect replacement for xmessage, which is ugly, or gxmessage, which is unavailable on many systems. It requires only ROX-Lib2 and Python, so it should work on any system that can run a 1.9.x version of Archive. If you want some examples of its use, look at the Secure Shell and CD-Burning wrappers listed above.
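To give a rough idea of how another script might drive such a dialog (only the --print option is named above; the executable name and the message argument here are hypothetical stand-ins for PyMessage's actual interface):

import subprocess

# With --print the chosen button is written to standard output; without
# it, the button would be read from the exit code instead.
result = subprocess.run(
    ["PyMessage", "--print", "Overwrite the file?"],
    capture_output=True,
    text=True,
)
button = result.stdout.strip()
print("user pressed:", button, "(exit code:", result.returncode, ")")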
I have put together a small package that I call ROX-CLI. It is a group of programs, written mostly in Python, designed to make it easier to deal with AppDirs from the command line. It contains scripts for things like running an AppDir, getting to its Help directory, finding its Choices directory, and so forth. The scripts use case-insensitive searches and environment variables to find AppDirs. The effect is similar to the search paths used by the shell.
This project is still in the early stages and there are a lot of directions it could go. Any feedback on it, including bug reports and feature requests, would be welcome.
Adding a spacer widget
Spacers are empty objects that can be used to push elements aside so that the background of the page becomes visible, to position other elements, or to carry a background image themselves.
1. Select the blue "plus sign" button to open the widgets panel for the section you want to add your spacer to.
Select the spacer widget.
2. The spacer height can be changed by using the handles at the top and bottom of the element.
The height will also be expressed in percentages of the screen height.
Additional limitations can be set using the min and max for both height and width.
All normal settings are available: styles, classes, attributes, background, align, border, ...
Simplifies restoration from an Active Directory forest disaster
Quickly restore your entire domain or AD forest in the event of a major disaster or AD corruption. Forest Edition greatly reduces downtime by selecting unaffected backups and then quarantining the damaged environment. The steps required to facilitate the recovery are automated, saving your organization lost productivity and revenue.
Requirements for recovering a domain or forest are simplified by leveraging the backups from Recovery Manager for Active Directory. You also have the option to restore some DCs from backup and others through demoting and re-promoting them with DCPromo, which aligns closely with Microsoft’s native forest recovery approach.
- Automation – Automates forest recovery steps from a single console, including the ability to quarantine infected domain controllers, reset DSRM passwords, simultaneously restore/recover domain controllers, and automate other configuration tasks like setting the FSMO roles, raising the RID pool, DNS, etc.
- Flexible recovery pacing – Creates pauses to suspend the recovery process at any time during the forest recovery.
- Online delegated granular restore – Restores directory objects without restarting domain controllers.
- Comparison reporting – Highlights changes made to the directory data since the last backup by using reports that compare the online state of AD with its backup. You can also make comparisons between different backups. If you’re running ChangeAuditor, the comparison reports will also let you know who made the changes. Deleted or changed objects, including attributes, are displayed in reports.
- Comprehensive recovery options – Enables you to restore any object in AD, including users, groups, organizational units (OUs), computers, subnets, sites, configuration, Exchange storage groups and Group Policy Objects (GPOs).
- Attribute-level restore – Even when the object itself has not been deleted, you can still restore individual attributes, such as account settings, group memberships and binary attributes. This enables you to restore only the required attributes without affecting other attributes.
- Remote quarantine – Allows you to automatically and remotely quarantine corrupt domain controllers so they won’t replicate with the newly restored environment.
- Recovery planning – Generates a detailed recovery process roadmap. This overview of every recovery stage and operation allows you to gain a better understanding and more control of every aspect of the process.
- Simultaneous system recovery – Restores domain controllers in your domain or forest simultaneously from one central console. This eliminates the need to manually interface with each domain controller separately, saving a significant amount of time and effort.
- AD management and health – Validates the health of Active Directory for warning signs of possible AD issues before they become disasters, including DC accessibility, replication, trusts and user authentication. Resets DSRM admin password and manages FSMO roles, GCs and DNS.
- Virtual lab environment – Builds a separate virtual forest test lab with production data to test forest disaster scenarios and safely perform testing prior to making changes in the production domain or forest.
- Consolidated backups – Speeds your time to resolution by enabling you to consolidate backups created by different Recovery Manager Console instances.
- Diagnostic data collection – Enables you to collect diagnostic data from all domain controllers in your recovery project by using the Forest Recovery Console as a central location. You no longer have to interface with each domain controller individually to collect logs and other AD data in order to determine the cause of a recovery failure or determine why Active Directory is not functioning properly.
Before installing Recovery Manager for Active Directory Forest Edition, ensure your system meets the following minimum hardware and software requirements:
| Processor | Minimum: 1.4 GHz. Recommended: 2.0 GHz or faster |
| Memory | Minimum: 1 GB. Recommended: 2 GB. These figures apply only if the Active Directory domains managed by Recovery Manager for Active Directory Forest Edition include 1 million objects or less. |
| Hard Disk Space | Full installation including the prerequisite software: 1.5-2.0 GB of free disk space (up to 2.7 GB on some platforms). If all the prerequisite software is already installed: 210 MB of free disk space. Note: additional storage space is required for a backup repository, at least the size of the backed-up Active Directory database file (Ntds.dit) and the SYSVOL folder plus 40 MB for the transaction log files. |
| Display | SVGA at 1024 x 768 or higher |
Targets for backup, restore, or compare operations
| Microsoft .NET Framework | Your computer must have all of the following versions installed: |
| Microsoft SQL Server and its components | Microsoft SQL Server: one of the following versions is required. Microsoft SQL Server components: all of the following components are required. Microsoft SQL Server Reporting Services: to display reports, Recovery Manager for Active Directory Forest Edition can integrate with Microsoft SQL Server Reporting Services (SSRS) 2008, 2008 R2, 2012 and 2014. |
| Microsoft Windows PowerShell | Microsoft Windows PowerShell version 4.0, 3.0 or 2.0 |
| Microsoft Windows Installer | Microsoft Windows Installer 4.5 |
| Microsoft Management Console | Microsoft Management Console 3.0 |
| Integration with ChangeAuditor for Active Directory | To provide information on who modified particular Active Directory objects, Recovery Manager for Active Directory can integrate with the following versions of ChangeAuditor for Active Directory: 5.6, 5.5, 5.1, 5.0, 4.9, 4.8, 4.7, 4.6, and 4.5. |
If any prerequisite software is not installed, the Setup program automatically installs it for you before installing Recovery Manager for Active Directory Forest Edition. If the prerequisite software to be installed is not included in this release package, it is automatically downloaded.
Forest Recovery Agent
Before installing Forest Recovery Agent on a domain controller, ensure the domain controller meets the following minimum hardware and software requirements:
| Processor | 450 MHz or faster |
| Memory | 256 MB overall RAM (1 GB recommended) |
| Hard Disk Space | x86 system: 850 MB or more. x64 system: 2 GB or more |
| Display | 50 MB or more |
| Operating System | One of the following operating systems: |
| Prerequisite Software | Microsoft Windows Installer 4.5 or later must be installed. |
Active Directory Virtual Lab
| Supported virtualization infrastructure | Requirements |
| Microsoft System Center Virtual Machine Manager (SCVMM) 2012, without Service Pack or with Service Pack 1 | Software that must be installed on the Active Directory Virtual Lab computer: |
| VMware vCenter/ESX Server 5.0, 5.1, and 5.5 | |
Recovery Manager Portal
| Processor | 1 GHz or faster |
| Memory | 512 MB or more |
| Hard Disk Space | x86 system: 850 MB or more. x64 system: 2 GB or more |
| Monitor | SVGA at 1024 x 768 or higher |
| Operating System | You can install the Recovery Manager Portal on a computer running one of the following operating systems (both x86 and x64 platforms are supported): |
| Web Browser | To access the Recovery Manager Portal, you can use Microsoft Internet Explorer 8 or higher. |
| Microsoft .NET Framework | Microsoft .NET Framework version 4.5 |
| Microsoft Internet Information Server | Microsoft Internet Information Services (IIS) 8.5, 8.0, 7.5, or 7.0 |
| Microsoft SQL Server and its components | Microsoft SQL Server versions: one of the following versions is required. Required Microsoft SQL Server components: all of the following components are required. |
Forest Recovery Console
Recovery Manager provides a central console to manage your entire forest recovery.
Check Forest Health
Validate the health of your Active Directory forest and manage domain settings.
Active Directory Virtual Lab
Create a virtual lab using production data with the Active Directory Virtual Lab wizard. The wizard utilizes existing third-party virtualization software.
posted by allison - please help!
Solve the following inequality.
x^3 + 9x^2 - 108 <= 0
Then write your solution in interval notation. I am so confused and do not have any idea how to do this. Can someone please show me step by step?
Thanks in advance. Someone who is confused.
college algebra -
This polynomial graphing might help sometimes:
zeros at x = -6 (a double zero) and x = +3,
so we really have (x + 6)^2 (x - 3) <= 0
It is negative for large negative x.
It comes up and bounces off the x-axis at x = -6, dropping back down negative again.
It does not actually come up and cross the x-axis until x = 3.
From then on it is positive.
Therefore it is negative (or zero) for x <= +3, with the extra touch of zero at x = -6.
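To finish with the interval notation the question asks for: since (x + 6)^2 >= 0 for every x, the sign of (x + 6)^2 (x - 3) is just the sign of (x - 3). So the product is <= 0 exactly when x <= 3 (it equals zero at x = -6 and x = 3). In interval notation, the solution is (-infinity, 3].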
I received a DVD of this from more than one member of CLW. Magnificent! This is Christopher Lee, Grauens Gentleman. I was also very happy to have the opportunity to see Christina giving her feedback. Unfortunately, it is in German, but when the DVD gets distribution in Brazil, I'll be ahead of everyone to buy it.
We have a user on Windows 7 who VPNs (PPTP on SBS 2008) to a remote office and then RDPs to his desktop. He has a home printer, an HP 2200 LJ, and printer mapping is enabled in the RDP connection. The printer shows up on his remote computer, but when he tries to print to it the print job never completes and just sits in the queue. There are no printing or TS errors in the logs on either PC. What could be the issue?
In this article, I'm going to discuss React JS and how you can create your first react app.
After hearing the word "React", the first things that came to my mind were: what is React JS? Why do JS developers use it? If you have the exact same questions that I had, then no worries; I'm going to clarify all of these questions.
What is React js?
Beginners often ask: is React JS frontend or backend? The answer to that question is the frontend; React is a JavaScript library for building user interfaces.
Create Your First React App
Before moving on, some prerequisites are needed to get started with React JS.
Here are the few prerequisites to get started -
- You have core knowledge of HTML and CSS, because HTML and CSS are used to create UIs for React JS apps.
- You are familiar with application development tools such as VS Code.
Now in this part, I'm going to show you how you can create your first react app. Before working on React, you must have NodeJs and NPM installed on your computer. If you haven't installed them, you can download the NodeJs LTS version from here. Then set it up on your machine; it is easy.
Now open your command prompt and run the following two commands:
node -v
npm -v
After running those commands, you will see that each shows you a version. If it doesn't show you one, try to reinstall NodeJs. If you see the version numbers like this, then you are ready to move on.
As you are reading this article, I assume that you are a developer and that you have VS Code installed on your computer.
In order to create a react app, you only have to follow two or three simple steps. It seems easy, doesn't it?
Now, search on Google with the keyword “Create React App”. Then you will see search results like the following picture.
Among these, I'm going to follow the second website; you can follow the other websites too. If you click on the second website link like me and do some scrolling, you will get to the "Quick Overview" section there.
These three commands are all you have to follow to create and start a react app -
npx create-react-app my-app
cd my-app
npm start
If you can use these three commands, your first react app will be installed on your local computer.
Now open VS Code. Click on File (in the upper left corner) and you will get a drop-down list like the following photo.
Now click on the Open Folder option; you will then get the option to choose the folder where you want to create your react app.
In my case, I select the projects folder because I want to create my react app inside this folder. After you select the folder, it will open up in VS Code like the following photo.
Now click on the View tab in the menu bar and you will get a drop-down list like the following photo.
Now click on Terminal and you will see the terminal open like the following image.
Here you can see that the folder you selected is opened in the terminal. Now it is time to run those commands. First, we have to run this command in order to create a react app.
npx create-react-app my-app
One thing you have to remember is that when you run the command npx create-react-app my-app, it will not only create a project but also create a folder for you. So earlier we only opened a folder where we want to keep this project; we didn't create the project folder ourselves. You also have to remember that in place of my-app you should give your own project name. Suppose you want to create an app for the ABC organization; then you can use, for example, ABC-Organization. In my case, I'm going to name my project demo-project, as I'm creating this project just to show you.
Now copy the command, paste it into the terminal, remove my-app and give your own project name like mine. After doing this, hit ENTER; you will then see it start creating your project. Now wait until you get something like the following picture.
When you get this, it is time to use the next command:
cd my-app
In your case, it will be cd your-project-name, with the name that you gave (for me, cd demo-project). Now hit ENTER, then paste the following command:
npm start
Now all the steps that are needed to create a react app are done; you can see this in the terminal.
Besides that, a tab is opened on your browser like this.
Now you see this in your browser. Woohoo!!!!! You have done it.
Congratulations!! You have created your first react app.
If you like this article, please do share it.
A collection of thoughts, experiences, ideas that I like, and ideas that I have been experimenting with over the last year. It covers HTML semantics, components and approaches to front-end architecture, class naming patterns, and HTTP compression.
"We shall not cease from exploration / And the end of all our exploring / Will be to arrive where we started / And know the place for the first time." (T.S. Eliot, "Little Gidding")
About semantics: Semantics is the study of the relationships between si...
I need several rendering images in 3ds Max. The rendering images must be high quality and very photo-realistic. The basic floor plan will be provided by the owner. Thanks for your interest.
We recently created a retrospective for our organization's 20th anniversary celebration. The retrospective included 10 large banners covering 10 aspects of our work in global health and cancer control. We now want to turn this into an online retrospective on our website ([login to view URL]). We want to develop a single webage with the 10 banners and a bit of copy. The content, graphics, and...
Checking the status/health of the tires of a vehicle through the camera: the app should detect whether a tire needs to be replaced or not. There are a few concepts through which this can be achieved via image/video processing. Revert if you are an expert in this field or are confident in solving one of the most practical issues affecting lives.
Hi all, because of demanding workloads we are looking for more reliable app developers that we can outsource our work to frequently. You must be able to develop iOS and Android apps, including game apps. Please link any apps that you have made before. The maximum budget for these is $1000 only. Do not bid higher than this.
My project needs semi-realistic lights and materials (various types of metals). I provide the 3ds Max file and I need back the finished file and a rendered animation (720 frames), for 3ds Max 2019. I'm not a 3ds Max expert, so if you ask me tech stuff maybe I won't be able to answer. The first 3 pictures show what I have; the last 2 pictures show what I need (with no texts).
Install server oroCommerce Community - Install orocommerce b2b server 3.0 RC - Leave OroCommerce Email Service Running the routine of import and export send an email when done (products, customers) this routine should be working and sending the notification emails - Add specified tasks in CRON - Perform procedure according to specifications [login to view URL] - Add backup jobs in cron I alr...
I have a website that is done and fully working, except I need to make a few of the graphics (umbrella) spin in a clockwise direction. I'm looking for someone who is very experienced in website JS animation. Non-complex animation. To start and complete by this weekend.
We need 3 videos of turbine silhouette with moving blades in different angles. Also it needs to replace wordmarks inside 3ds max file. We have 3ds max file without textures. Pic of how should look the silhouette uploaded.
Parallax, full-width, responsive website for saas app, wordpress We have actually website but there are many bugs, we need fix it or create from zero to hero. We dont have one week, we want do that in 3 days :) Show me Your the best site ith good google page speed value and parallax effect.
Looking for a small website maybe with a blog. Ideally we would be giving away an e-book to collect email addresses. It could potentially be just a 1 pages site. Our business is an instagram growth service and we are promoting a free e-book. That is the purpose of this site. We would like on and off page SEO as well please.
HTML page design - one simple page - good looking, in 1 hour max
custom woocommerce success page design - should be completed in 2 hours max. pls don't bid if you are not good with front end designing
System is not required just small peace which POS paper format
We are looking for a developer to develop a Blender and/or 3D Max file parser. We don't need the entire object model parsed. We are interested in the following information: Camera(s): -Name of camera -Location of camera Time line: -Number of frames -Keyframes Materials: -Material names -Material types -Used Texture maps Meshes: -Mesh/object names
I need images in posts to be 90% tall or something like that. at the moment images are really too small and don't take all available height. please only get in touch if you can finish it today and don't ask any more questions as I am really busy and I think I have already explained what I need (look at the images attached) see website : http://www
pls refer to [login to view URL] to design a company signboard, size 20ft (width) x 3ft (height); submit the AI original file; modern, simple, building control technology based
Wordpress front end developer needed - URGENT TASK TO BE COMPLETED IN 2 HOURS MAX
I need a PROFESSIONAL website facebook developer to reset my face book page and configure the website to provide seamless integration between website and social media
Hi we are looking for native professional French Canadian ( Accent Quebec French !!!!!) VO to record max 1 min audio. If you are interested please send us your demo (as the attachment). Please confirm your availability in 24 Thank you!
I have Classipress wordpress website and I would want a different metaslider image at the each page. However, I would only need the metaslider image to be full width. The rest should remain the same. Please see attached file for your reference. The image is boxed in RED.
I'm looking for a dev who can help me find and add a good WordPress template, then copy content from the existing website, change styles and add SEO optimizations.
Just to build model from the layout plan, i have done a little, just have to follow up with mental ray rendering, lighting and 8 seconds animation.
We want more visitors on our website. The visitors must mostly come from .dk (Denmark). We have 500 visitors today. External SEO is on other servers we can manage. We don't care about how the SEO is designed, but no banned results on Google. After testing, and once we have approved the system, we release the milestone. No prepayment.
I require a full column width section in my modern pages in the root site collection. Currently this layout is only accessible using the communciation site template; however we’d like to use this layout on our tenant homepage. Budget £100.
I have a bunch of files in various 3D formats for wax candles with flames and their associated textures. I need one of these candles (#05) and its flame: 1) properly converted into format(s) and dimensions compatible with ARKit 1.5 2) added to an empty XCode project with a single ARKit-based view controller 3) added to a simple ARSCNView so that they are displayed as intended on a surface (e.g., ...
I have a logo and I need a few more versions of it. I need one with the background removed and one designed in wide format. The last one is for print and should be in corel max 15 format, converted to curves, with no gradients or shades and lines thicker than 0.8 mm.
...paint easel with a painting on it) 10. A Statue or a Monument (could be any type of stature or monument. Use your imagination. ) These models need to be designed in either 3ds Max, Maya or Cinema 4D (It’s entirely up to you) and need to be exported as .objs, .fbx’s and possibly .3ds files if possible There should be some type of textures or materials applied
[login to view URL] is the Best Online Casino Reviews website - We need a full width header banner and an animated logo - Header image(PNG) 2000px width x 400px height - Animated logo(GIF maybe?) 150px x 150px The header image should be something repetitive related to online casino, gambling, but not something with cards and dices only, something
I need a really simple windows program that can convert a given text to a video file. For example the program will take a text of 3000 words content and split it into 7 pieces and convert it to a video file of 7 slides. I want to keep it simple. You can do a php version of it too if you know how to.
This is quite a simple task; the work is done, I just need someone to transfer it to another project which is the same. Knowledge of Ionic and also Firebase is preferable. More details will be provided.
And I thought I was ambitious, trying to write a programming language!
My friend Oli has decided to reinvent programming as we know it. Details are still trickling out via his web-site : Semantic Programming. And I’m in frenzied skype conversation with him, trying to figure out what it’s all about.
In outline, it starts from some intuitions behind the Semantic Web : that there should be a massively parallel, distributed graph-shaped database of facts (relational assertions) represented on different machines across the world. But it then layers programming on top of that. Instead of a passive data-structure crawled by scutters, SemProg agents (roughly, the servers which manage different data-nodes) are active. There is message passing between the facts themselves, and agents may have hardwired interpretation to act on some facts (what Oli is calling “axiomatic” understanding), or a “deductive” understanding (I guess rather like Prolog inference), and even a “behavioural” understanding via (I guess again) learning from observing other agents.
I’ll keep following this here on Smart Disorganized. Very interesting if it works out.
Would GeekWeaver support Semantic Programming? It seems like Oli is thinking of multiple editors for different types of information, all of which compile down to the same underlying graph-structured format so that the data can be combined. (Rather like Language Oriented Programming.) It seems quite possible that GeekWeaver could output something like his graph-format. I’ll certainly be experimenting.
I’m also trying to persuade Oli to look at Erlang as a potential implementation language for the distributed virtual machine. I’m increasingly impressed by Erlang. Finding it very powerful and concise. | s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00450.warc.gz | CC-MAIN-2020-16 | 1,773 | 6 |
I have a game with graphics and sound and allow you to turn them on or off. In both cases, the default is on.
I just realised that if I turn them off and restart the game, they turn themselves back on again. I presume this is because I use global variables to keep track of the status, and when I restart a game the global variables are reset. A similar thing happens for restore if the current status is different from the saved status.
So is it possible to save the status such that it persists after a restart or restore, without resorting to saving to a file?
I've set up MAME with Project64, NEStopia and ZSNES (which for some reason won't load; it keeps freezing up on my Windows 7 machine). I've tried both HyperSpin and MaLa as front ends. I can't for the life of me set a dual-button exit to leave a game and go back to the front end. I've even set UI Cancel in the MAME input options menu to be a combination of 2 buttons on my controller. That works if I load up a game using command-line MAME, but it doesn't work when it's run from one of the front ends.
Also, does anyone know why ZSNES might be freezing and crashing my PC on startup? I have plenty of memory (6G of RAM and dual core).
Thanks so much.
TikTok is famous among Indians for short-video social entertainment and funny, interesting videos. Unfortunately, last year the government of India banned 59 Chinese apps, and TikTok is among the banned ones. As a result, TikTok users are frustrated, searching for TikTok or using a VPN, but that is not comfortable for all users. So Indian developers have made TikTok-like apps that address Indian users' needs. Here is a complete list of Indian TikTok-alternative apps.
Roposo: Indian TikTok App
India's favourite short-video app in Hindi, English, Punjabi, Marathi, Tamil, Telugu, Kannada, Gujarati & Bengali! It has almost the same interface, with nearby TikTok users.
Moj App by ShareChat: another version of TikTok
Moj, by the ShareChat company, is one of the best Indian short-video apps. ShareChat made Moj as its own Indian short-video version of TikTok.
Mitron: the same kind of Indian TikTok app
The Mitron short-video app even made it its slogan to be the Desh Ki Awaaz on Desh Ka App. It is one of the Indian TikTok-alternative apps.
List of Alternative TikTok Apps
After the ban of TikTok, many short-video apps were published legally, but none of them replaces TikTok one hundred percent. Some Indian and international developers, however, provide good replacements for the Chinese TikTok.
Zili Short Video App
Snaky taka tak
Why TikTok-alternative apps?
After the ban of the Chinese TikTok app in India, users became interested in Indian TikTok-like apps; because of that, developers made TikTok-like apps.
https://uwischolar.sta.uwi.edu/activity/view/13dddc43-9d79-32be-a387-1e4ad20156e4 | code | Effect of sodium benzoate on the haematological data of Wistar rats.
Priya, R Jyothi and Sridhar, R and Balachandran, C and Manohar, B Murali and others
Murali Manohar Bhakthavatsalam
Indian Veterinary Journal
| s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00315.warc.gz | CC-MAIN-2021-49 | 299 | 5
http://themoesfamilyintexas.blogspot.com/2008/09/my-favorite-place.html | code | I had an idea of where I wanted to put it even before I brought it home. This is my entry way. It's a good size and sometimes doesn't have that cozy feel that I want. This cozied it up for me. As for the cute shabby chic board. I'm going to hang onto that until I find a table for the hallway. I want to hang it above the table and either hang my kids pictures from the hooks (since it's next to their rooms) or hang the letters of their first names on it. We'll see...I have to find the table first.
This is another idea I got from McKinney that I want to do:
I like the clear jars with the flowers in them. I want to put a lamp on the table and then 3 of these. The only problem is I don't have any idea what I'm looking for in a table other than the size. It's one of those "you'll know it when you see it" kind of things.
Do you remember seeing some jars (above in the picture of what I bought) with the keys on them? Well, I was moving them around all over the place trying to find a good spot and...one of them broke. So sad, but I'm lucky enough to be saved by my aunt. Thank you Barbara! She went back to get me one and it's in the mail. I will tell you more about those when I have it all finished!
Long post, I know (sorry), but hopefully it was worth the read!! :) | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00137.warc.gz | CC-MAIN-2018-22 | 1,275 | 5 |
https://www.inveniam.fr/career/senior-devops-engineer | code | Senior DevOps Engineer
Inveniam AI is a pioneering technology company specializing in the development of innovative AI-driven solutions. Our mission is to empower organisations to harness the power of artificial intelligence to enhance their businesses and solve complex problems. We are a growing team of passionate professionals, committed to creating cutting-edge tools and services that transform the way our clients approach their challenges.
We are currently seeking an experienced and highly motivated Senior DevOps Engineer to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining our infrastructure, ensuring the smooth delivery of our AI-driven products and services.
- Design, develop and implement CI/CD pipelines to ensure seamless integration and deployment of our AI solutions.
- Collaborate with cross-functional teams to identify and resolve infrastructure and deployment-related issues.
- Monitor and maintain the health, performance, and security of our cloud-based infrastructure.
- Implement and maintain automation tools and frameworks to enhance operational efficiency.
- Develop and maintain documentation for infrastructure, deployment processes, and best practices.
- Mentor and guide junior team members, fostering a culture of continuous learning and improvement.
- Stay informed about emerging DevOps trends and technologies, actively implementing them to maintain a competitive edge.
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in DevOps, with a focus on cloud-based infrastructure (AWS, Azure, or GCP).
- Strong knowledge of containerisation technologies such as Docker and Kubernetes.
- Expertise in designing and implementing CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI.
- Proficiency in scripting languages such as Python, Bash, or Ruby.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Ansible.
- Strong understanding of networking, security, and monitoring principles.
- Excellent problem-solving skills and ability to work independently or as part of a team.
- Strong communication skills, both written and verbal. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510225.44/warc/CC-MAIN-20230926211344-20230927001344-00308.warc.gz | CC-MAIN-2023-40 | 2,228 | 18 |
https://theaussieknittingco.com/collections/patterns | code | Due to the covid 19 situation these products have limited stocks as Countries involved are on shut down so no stock is leaving. If you would like to check first please email us at [email protected].
The Aussie Knitting Co offers a large collection of Knitting Patterns and guides.
We have a large range of resources to help you in crafting specific projects or sharpening your general skills with handy techniques and tricks. From modern fashion to classic tips we have something for anyone. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499934.48/warc/CC-MAIN-20230201112816-20230201142816-00224.warc.gz | CC-MAIN-2023-06 | 504 | 3 |
http://4archive.org/board/g/thread/52308835 | code | Previously on: >>52295996
Intended for users of all levels, including absolute beginners.
There are a few ways to try Linux; you can:
1) Install a Linux OS on a VM (Virtual Machine/VirtualBox) for "safety purposes"
2) Use the Live ISO directly without installing anything, that way, you can get a "full Linux experience".
3) Dual-boot Linux with Windows/Mac (recommend if you want to learn more about Linux)
4) Go balls deep and overwrite everything with Linux (not recommended)
If you are serious about switching to Linux and if you have Windows dual-booted, we recommend you use it exclusively for 2 weeks, and avoid Windows dual booting for that period of time, or it's likely you will start retreating back to windows instead of getting used to Linux as your new home and working on making it feel the way you want it.
Before asking, please find the answers to your questions in resources.
Please be civil, notice the "Friendly" in every Friendly Linux Thread.
Understand that much of your software from Windows will be unavailable, although maybe wine can make up for it.
man <insert command here>
your friendly neighborhood search engine
What is Linux (or GNU/Linux for Stallmanists)?
Babby's First Linux (What distro to choose?)
What software does /g/ recommend? (Please DON'T include the so called infographic [it's reddit-tier] -- refer all your recommended software here.)
Ricing on Linux (Make it good and functional or make it worse/puke-inducing like those at desktop threads)
A script designed to ease the transition from Windows to Debian
Check out this page for any updates on the OP
IRC No one uses:
Under themes like Adwaita and Paper, all window decorations are the same size. I have no idea why.
My biggest problem with Arc isn't that, but I wonder what sets Arc different from Adwaita in terms like this. People say client side decorations, but how can themes overide this?
Arc looks like what it is intended to be. Old decorations are smaller, with File/Properties etc.
Paper forces the old ones to be bigger so they fit. But it's wasted space if you think about it.
Friendly reminder to not listen to some dude on /g/ and instead figure out which distro you need by looking around on google and basing your decision on what kind of user you are.
I've still got it. Don't worry. Glad I took the time to write that up.
Is it actually better than MATE?
I use MATE because:
a) I started with GNOME 2
b) doesn't require accelerated graphics
c) it has some nice themes.
(repost from last thread: my modified ubuntu 10.04 Ambiance/Radiance themes with some GTK3 support added)
Fedora's not my cup of tea but I appreciate it.
Windows 10 is shit. That's not debateable. It invades your privacy, and you can't even disable automatic updates via registry in the home edition.
When I'm forced to use windows, I'm sticking with 7. I stuck with XP pretty close to the end.
I like the "wasted space" because it creates a "unified experience" (for lack of a non-buzzword). I big complaint about windows 10 is that certain UI elements cannot conform to a standard. It would be nice to avoid that.
My biggest beef with windows 8 and up is that it's impossible to change the fonts the UI uses through non-arcane binary-stored-in-registry entries.
My next biggest beef is privacy invasion, followed by 'cloud integration'.
Why do you guys reply to that windows shill in every fucking thread even though he posts exactly the same thing every single time? It's not like he's open for debating that shit neither, he just responds with greentexting and memes like a /v/ermin he is.
My biggest beef (for my laptop) is HIDPI support. I'm okay with shit HIDPI support so long as it's readable. In Linux, it is, or you can make it to be readable very easy. In Windows, it isn't readable worth a damn. I love how half of my programs are a blurry mess. Thank you Microsoft!
Just so we're clear left is new decoration and right is old. So your problem is that the old ones are smaller in arc? The thing is they are smaller but they take more space than the new ones when you show File/Edit menu. I think it has to be this way. Paper looks very weird on my machine desu. I think it's also deprecated
To me it sounds like you need glasses. However, yeah, that's a real problem now.
Of course, on windows 7 and lower, it's a piece of cake to fix fonts.
M$ just loves removing functionality though. Even 7 is missing the MIDI mapper that they took out after XP. Real pain in the ass for MIDI enthusiasts.
Paper looks correct for me.
It shouldn't be this way, plus the default theme does it correctly.
Check on your icon theme
You may have installed the Paper icon theme as well.
Also, does the Paper Cursor theme do anything?
I like Kubuntu but I can't stand APT, are there any RPM based distros that just work without too much configuration? I liked opensuse but the multimedia codec support was terrible and confusing. Korora seems like a good choice but I'm not sure, any opinions on it?
You only have to read it if you feel like it. I'm not going to say you have to. But I read through the anti-linux guy's "reminder" to write this, and his is at least as long. My font's just larger.
Still, if you want to know why linux compatibility is nowhere near as terrible as he says, read it.
adobe flash has always been garbage, this is why nobody uses it anymore.
Switchable graphics have been fixed, for several years now.
keyboard shortcuts work better than on any OS out there
There are easy ways to install software not in the repository, but the solutions vary depending on the distro
No I don't play games.
Microsoft office works fine
>crucial windows applications
File sharing is easy on linux. There is several good options actually.
mounting an android phone works fine but most people prefer to use networked transfer
>advanced things like rules for when to switch graphics, nonstandard display settings and multiseat setups require configuration.
The fact that you can configure those things from a readable config file is awesome compared to the most basic things you have to configure through the registry on windows.
Don't know about iphones, but since you can manage all other e-gadgets over networked applications, the only one you need physical access to is printers and I have yet to encounter a printer that didn't work on linux. (with way less configuration than on windows)
The only valid point was that some applications don't work.
There are replacements for those mentioned, but as long as nobody wants to pay the same amount of money for the development of free applications as they pour into proprietary applications, they will not be as good.
Korora is for Fedora what Manjaro is for Arch. Don't bother.
On fedora you just enable rpmfusion free+nonfree and install media codecs. It doesn't ship with multimedia codecs due to legal issues (Fedora belongs to redhat and as a US based company it has to acknowledge retarded American patent law).
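For reference, a sketch of that setup using the commands RPM Fusion documents (the codec packages at the end are examples, not an exhaustive set):
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
sudo dnf install gstreamer1-libav gstreamer1-plugins-ugly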
dpkg/apt breaks dependencies a lot.
At least it did on wheezy/jessie. It's a rather mediocre package manager.
I have to admit, rpm/yum was good but slow, but since dnf replaced yum, it's a godsend.
>aigh senpai let me fetch a frontend to a frontend to my package manager
>this will surely solve all our problems.
if you don't like inconsistent command syntax, why not just go for a better package management system altogether if you are going to learn a new tool anyways?
>Archfags shill a new good program on /g/
>apt-get no results
>search hours for a PPA
>last build 4 years ago
LTS never again.
Please don't shill babby linux'es. Babby wants working shit, not packages from 1992.
what software anon?
PPAs are not exactly a hard thing to get into; you just provide a source and they compile shit to make it accessible to everyone.
Similar to Fedora's COPR buildsystem
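A sketch of both workflows (the repo and package names are hypothetical):
# Ubuntu PPA
sudo add-apt-repository ppa:some-team/some-app
sudo apt-get update && sudo apt-get install some-app
# Fedora COPR (needs dnf-plugins-core)
sudo dnf copr enable some-user/some-app
sudo dnf install some-app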
Shit distro built on top of a shit distro.
>can't even maintain its site properly
>bug on forum website, SSL expired, some webdev, who has nothing to do with distro deveopment gives shit advice.
Wow it's fucking nothing.
>implying other projects don't have any bugs, did shit before
Hurr durr Manjaro a shit, but still suggesting fucking Amazuntu for babbys first linux? The fuck?
OH YEAH THEY REMOVED IT AND IT WAS ONLY UNITY
What a bunch of meme posters.
Come back when you know what you're talking about and dont need to parrot /g/ - Random memes
>SSL expired, some webdev, who has nothing to do with distro deveopment gives shit advice.
SSL cert expired and the team behind the project allowed this extremely bad advice to be posted on their front page. You realize setting your clock back doesn't just "fix" that site, right? It can also potentially break thousands to millions of others and compromise the security of your users.
>/g/ hates manjaro because muh sekrit arch club
arch has a shitload of documentation on it, and has highly active forums. it is a famous distribution, right up there with Debian, Ubuntu, and Mint. anyone who thinks it is a secret club is an idiot.
>manjaro opened the door for newfags, 1337 /g/ doesnt like that
manjaro defeats the whole purpose of arch, that's the problem. if you want a user friendly distribution, then go with one that has rightfully adopted such reputation. if you want a more, not advanced, but intermediate distribution, then go with arch. manjaro is just an awkward compromise desu.
How come, every time when nobody is asking a linux related question, /flt/ starts to bloody murder itself?
No I read his comment but I don't agree with it the project's maintainers should be the ones green-lighting these posts to the front page. If they allowed that post that just make them look really ignorant. If they don't filter their posts to the front page and just let some indian web"dev" to post whatever shit he wants to the front page that could be worse.
I'll be honest, /flt/ is pretty much the only reason why I still come here.
/g/ is fucking doomed and I absolutely despise all those fucktarded consumer-oriented bullshit repeated over and over and over again by bored /v/irgins.
I have nothing to do at work for most of the day, so I just keep it open in the tab, hoping I could help some poor soul so he could have the "Oh my GOD this is great" moment we all hope for when installing linux for the first time.
When is Wayland going to have sticky keys, and who the hell is supposed to implement them anyway, say, in a wayland-wlc-sway setup?
My keyboard is broken, mostly with modifier keys not working when pressed at the same time as some other keys (not all, though). I use sticky keys as a fallback for that.
In any case, seems like no Wayland for me yet, unless I'm willing to use everything in a XWayland setup.
In fact, when does Wayland get suitable for practical use anyway?
this is the related section in my Xresources
URxvt.clipboard.copycmd: xclip -i -selection clipboard
URxvt.clipboard.pastecmd: xclip -o -selection clipboard
works inside urxvt but not in VIM
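Those Xresources lines only configure urxvt's own clipboard extension; vim talks to X itself, so it needs clipboard support compiled in. A minimal sketch for the vim side of it:
" first check :echo has('clipboard') - it must print 1
" (if it prints 0, install a gvim/vim-gtk build, or pipe selections through xclip manually)
set clipboard=unnamedplus   " make yank/put use the CLIPBOARD selection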
arch is the best beginners distro there is. spend a week at most in a live USB environment to learn the command line, then just go balls deep into arch.
best way to learn linux imo, and there is an infinite amount of resources to refer to.
I just mess around with highlighting/middle-clicking, ctrl+c/ctrl+v, and ctrl+insert/shift+insert.
Something eventually works.
Shift+insert seems to get the job done m ost of the time. If you've highlighted something in the terminal, middle-clicking somewhere else will paste.
>In fact, when does Wayland get suitable for practical use anyway?
It's still like 4 months till Fedora 24 release. It's scheduled to be the first major distro that comes with Wayland as a default display protocol.
Even though Gnome 3 and default gnome3 applications support wayland, everything else will require XWayland compatibility layer to be used.
The sooner we ditch X for Wayland, the better, but we still have a long time before it gets fully adopted.
I would consider X to be obsolete when RHEL adopts it as default and Nvidia/AMD having proprietary drivers for it.
Till then, it's all beta testing and getting it "to work"
>Even though Gnome 3 and default gnome3 applications support wayland, everything else will require XWayland compatibility layer to be used.
Wait a minute, don't you mean qt5 and gtk3?
I thought these were the ones that were independent of both X and Wayland.
What games are you guys playing?
Just bought Saint Row 4 and KOTOR 2, but I think I'm going to play Borderlands 2 tonight. Need to finish at least the main story
What releases/ports are you looking forward to this year?
Not starting a Linux thread there...
I think you meant to ask it in >>>/sqt/
No, absolutely not.
Just be careful, avoid electrostatic charge that wreaks havoc in delicate electronics.
Also make sure to have thermal paste (a small 5g tube will last you for many uses); it's good practice to replace it when taking off the heatsink.
Common sense anon
KOTOR2 is brilliant. As long as you use Content Restoration Mod and accept the fact it isn't fully done anyway. Still, it's brilliant.
I mostly play Crypt of the Necrodancer, emulator stuff, occasionally Kerbal Space Program and my dorm neighbor recently hooked me on Minecraft on his heavily modded server.
Also, remember running Sid Meier's Alpha Centauri for a game once.
World of Goo
Puzzle games (like tis 100
and n64 emulator (zelda and mario64 pretty much).
I use a ps3 controller for the emulator, the fact that it works right out of the box on linux is kind of nice.
But I play games very rarely
I've had broken dependencies in apt only when I was a complete fucktard noob.
Not had it happen in about 3 years.
>I'm ascared of the CLI
>I don't know how to apt-cache search
>not knowing how to compile a fucking program
Consumerism is a disease. This is true.
Behold, my laptop of 8 years. Still works fine.
You can change coolers without damaging the cpu. Just turn the PC off while doing it. Also pay attention to the thermal paste. Sometimes it sticks to the old cooler so it might need to be replaced.
Same. I have a key from when I bought my GTX 960. Wondering if it'll work for a Linux version because I never redeemed it. On the plus side, if they take forever, I'll have a Pascal card by then
>Behold, my laptop of 8 years. Still works fine.
Same here, 8 years, still going. Last year replaced the dead battery.
I think It'll probably serve 2 more and maybe I'll replace it with something newer.
It just looks like trash, grey silver painted with shiny silver doesn't work well in the long run.
If I could get my hands on some space-efficient, used office prebuilt for cheap it would be great.
I would add an SSD to boot the OS and call it a day
nigga who tf gives a shit about this? truth is some people like using Linux, some like using Mac OS, some like Windows why tf you think some shitty lil numbers gonna change their opinion? It aint a logical decision, it's a feeling you get when you use the OS, and some numbers aint gon change that familia
>arch is the best beginners distro there is
When I started I couldn't figure out how to compile a program or why gcc couldn't find a header (I didn't have a '-dev' version of a package installed). I was around 12 at the time, but it's NOT a good idea to start with arch. I could barely figure out Ubuntu (which was not total shit back then).
It won't damage the CPU. You'll want new thermal compound though.
I just use stock - the i5-3350p's stock fan is pretty quiet and the CPU itself runs pretty cool anyway (69 watts TDP).
Quake (1), Pokemon Blue, Castlevania III, Parasite Eve, Star Wars Battlefront (the old one, not II and not the EA one), since it works in Wine really well and you can configure your /etc/hosts file to do multiplayer without gamespy.
I've played Red Eclipse some, of course. Was pretty good too. I just never got bored playing Quake.
Neat. Mine's a Dell Latitude D630, btw.
I can see yours is Fujitsu, I've never had one of their laptops but I've heard they're pretty great.
...is that a 4:3 screen? What resolution?
I've also got a Latitude D610, which is older by a year, but runs at a lower temperature and has a parallel port (great for old legacy DOS programs and hacking). It has a 4:3 1024x768 screen.
I love 4:3, personally. Wish they made larger resolutions of it.
There's a desktop 1:1 1920x1920 LCD out there (EIZO), but it costs big bucks.
My latitudes both came from my old high school - I was friendly with the IT department and they let me pick up a stack of broken laptops. I swapped parts until I had several working ones.
Which numbers are wrong? I don't remember giving specific numbers, just saying that the claim that "almost all" devices did not have software support in linux was wrong.
>I can see yours is Fujitsu, I've never had one of their laptops but I've heard they're pretty great.
Well, I can't tell but from my experience, fujitsu's consumer models are higher quality than consumer models from toshiba or dell or hp
Its an Amilo Pi2530.
>...is that a 4:3 screen? What resolution?
16:10, 1280x800, so pretty weak. I hope to replace it with a desktop monitor, at least a HD one.
3:2 would be the perfect ratio tho.
>My latitudes both came from my old high school - I was friendly with the IT department and they let me pick up a stack of broken laptops. I swapped parts until I had several working ones.
bought it used, 500GB drive full to the brim with westerns and spaghetti westerns.
By far the biggest culprit with this device is the fact that originally it shipped with Vista.
And because some retarded deal between intel and Microsoft, it requires AHCI drivers to be installed.
So in order to install XP on it I had to make a custom installation ISO with nlite.
And in order to install 7 I had to take another fucking computer, put the HDD, install windows, use regedit to enable AHCI mode and then put the HDD back in fujitsu.
Linux doesn't have this problem fortunately.
>Well, I can't tell but from my experience, fujitsu's consumer models are higher quality than consumer models from toshiba or dell or hp
Good to know.
I think the latitude counts as a business model - not consumer. In any case, aluminum frame/casing, a trackpoint clone, serial port on the back, IEEE 1394 port on the side, and an almost thinkpad-tier keyboard (and it has the nice normal-positioned navigation key cluster). I love it.
I had an inspiron 1420 at one point (consumer dell), it died about a week after the warranty ran out. Never again.
I thought it was 4:3, must be your lens or something.
My D630 has a 1280x800 screen as well, but I think it may be possible to get a 1440x900 screen in there, based on the way my controller board looks.
nice. Mine's a lowly 120. I've got ~6 TB total of hard disks in my server desktop though, so that's where I hold most of my stuff.
>By far the biggest culprit with this device is the fact that originally it shipped with Vista.
Ah, yeah. the AHCI thing. My laptops have a BIOS setting to switch to IDE emulation - it's what I used when I needed XP.
Now it's debian sid only, though.
Phone tethering's a safer bet, I think, but I've only ever tried expresscard modems. Maybe USB is a bit better. Sorry I can't help more.
Are you sure it's not kernel drivers?
I'm using a USB 4G modem too, and once I took care of the drivers, it basically started to manage it all by itself.
Mine used Wimax drivers, I think.
It's just that drivers are technically part of the kernel. It's just that not all of them are included by default.
In any case, what kind of modem do you even have? Also, try searching for "wimax" in your package manager.
>what kind of modem do you even have?
huawei E5372 "mobile wifi"
it's connected to 4g, plugging it in usb starts charging its battery and lets the computer use the internet connection
it shows up as an ethernet device in windows when the drivers are installed
>try searching for "wimax" in your package manager
i don't have internet connectivity on the computer i want to install linux on
i'm in a ubuntu live session and haven't installed it yet since i don't know if i'll get the modem to work
There's shitloads of ports of the original doom engine on Linux.
Hell, the actual source code released by Carmack under the GPL, from which all doom engines and clones in existence derive, was a Linux version.
If you want to play vanilla doom1, doom2, hexen heretic try chocolate-doom port, it should be in every distro's repositories by default.
For more advanced functionalities and modern WADs there are other ports as well.
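For example, on a Debian-based distro (the package name and WAD path here are illustrative):
sudo apt-get install chocolate-doom
chocolate-doom -iwad ~/games/DOOM.WAD   # point it at an IWAD file you own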
What is a Linux native game with decently good graphics? I have a VMware VM going and I want to test the 3D capability? I can't get Steam to start for some reason, Google tells me it's because of their packaged libs. So it'd be nice if it was non-steam.
Well, you could just download some casual shit like dota, try out different graphical presets and check the framerates you're getting on them to determine what kind of performance you're getting.
That's at least what I would do.
Try out installing wvdial, or modemmanager, or usb_modeswitch if the ones before don't work.
You would need Internet to download these, though.
Apparently many Huawei models work both as USB stick contatining (useless for you) Windows drivers, and as modem, but it seems your Linux correctly detects it as a modem. usb_modeswitch is supposed to "flip" the internal switch, but not sure if actually the problem.
I don't actually have any idea if what I just said was good advice. It's just what Google said. Might be worth a try.
Always create a large ntfs-formatted "media" partition if you're dual booting. Linux and Windows will both be able to write to and read the shared partition.
You'll also want to use the following mount options for NTFS: rw,iocharset=utf8,big_writes
I made a special flowchart that hopefully will help in clearing all your doubts whether you should or should not use Linux on a desktop computer.
If you have any questions, feel free to ask, we're here to help you.
>>arch is for webbs
just flip the switch
>fedora has no packages
>>free as in freedom distros don't work
well, if you're dedicated to freedom you should start with the hardware.
>>lossing shit when someone mentions manjaro
fuck off manjaro dev
gentoo is a source based installation with a source based package management.
Installing packages isn't like in binary based distros where you just download the package and it installs.
On gentoo you download the source code, compile on your machine and then install.
So good luck anon, depending on your PC, firefox can take up to 2 hours to compile
latest ubuntu downloaded from their site about an hour ago
i did "usb_modeswitch -J" with the device and re-plugged it
now it shows my ISP and "LTE" in this dropdown but won't connect to the internet
the wlan from the device still works though but my desktop doesn't have that
Use kubuntu, then.
Personally, i use Antergos and Manjaro.
I started out using Ubuntu then Linux Mint. Then, i wanted free internet so i started cracking with BackTrack3.
Then i tried some other distros like Elementary OS, until i reached Arch Linux and fell in love with the AUR.
So, there you go, I'm not some Ubuntu fanboy, I just recommend noob distros to Linux noobs.
Didn't this used to be called the Friendly Unix Thread?
My server's boot drive is failing, I'm going to take the chance to be rid of the Debian meme forever and install FreeBSD.
Am I to understand that I can only mount an ext4 filesystem as read-only on FreeBSD? The only information I could find regarding the matter are forum posts form 2011, and a lot can happen in 5 years.
Well, technically, nothing stops you from installing binary packages in case you have them.
I'm not sure if portage holds binary packages somewhere in a mirror either since I never used the feature.
In any case, while some packages are really heavy, most of them have a binary version as a separate package, and if you try to leave the heavier ones for night with --keep-going parameter, it's not actually that noticeable, since most packages compile in a minute or two on my i7. Libreoffice and llvm are the worst exceptions, really; even the firefox is not actually that bad, but libreoffice compiles for hours. Kinda tempted to use binary version from now on.
Oh, also you can use whatever other package manager you want there too, but in this case it would hardly be gentoo. Half the Gentoo is just the portage and the system it brings.
Sorry if this reply is slow, I'm posting on Android
These options make an NTFS partition:
>readable and writable
>Use utf8 encoding for filenames
>Enables big, concentrated writes which lowers CPU usage on Linux
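Putting that advice together, a sketch of the matching /etc/fstab entry - the device and mount point are examples, and it assumes the ntfs-3g driver:
/dev/sda5  /mnt/shared  ntfs-3g  rw,iocharset=utf8,big_writes  0  0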
>stability by requiring an erase and fresh install every few months
meanwhile, my rolling release install from three years ago still runs, and the one time I tried upgrading to another version of Ubuntu (by changing my sources.list) I ended up with a system that couldn't even start X11.
w/o steam? I don't know. What distro is this that can't handle it?
Oddly enough my problem is the reverse: my packages are too up to date for vmware to be able to run on my Linux host.
>>52312168mount -o rw /dev/sdWhatever /mount/point
or if it's already mounted at for example /mount/point, you'd domount -o rw,remount /mount/point
>cant find any firewall online
Anyone set up the unified AMDGPU kernel driver with Powerplay support for newer AMD cards?
You need to reconfigure the 4.5 kernel and install the latest mesa and llvm.
There's a PPA for ubuntu 15.10 for the latest mesa and llvm
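A sketch of what that kernel reconfiguration looked like in the 4.5 era (option names from that period - double-check them against your kernel's Kconfig):
CONFIG_DRM_AMDGPU=m
CONFIG_DRM_AMDGPU_CIK=y   # route Sea Islands cards to amdgpu instead of radeon
# PowerPlay shipped disabled by default back then; it was switched on at boot with:
#   amdgpu.powerplay=1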
This is a fresh debian net install with only SSH and System Tools installed. No GUI, no nothing.
Why the fuck is 1.6GB already in use?
Can I have a quick rundown of how startx and xinit work and their differences?
I managed (after trying a lot) to set up a DE the way I wanted from having nothing (old debian version, no GUI at all, just terminal), but I had to use the default conf files, although I read lots of times I should use the ones in my home directory.
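Briefly: startx is a friendlier wrapper around xinit (presumably what "initx" above means), and both run ~/.xinitrc if it exists, falling back to the system-wide files under /etc/X11/xinit/ otherwise - which is why you ended up on the defaults. A minimal per-user sketch (the session command is an example):
#!/bin/sh
# ~/.xinitrc - startx/xinit run this instead of the system default
exec mate-session   # or: exec i3, exec startxfce4, ...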
yeah I have no idea.
I set up a media center with Plex on it and it idles at 90% memory usage. Granted thats only got 1GB of memory on it so I thought it was all the extra services I didnt need, so I threw this onto a VM with only SSH and it's still going nuts.
Share a folder on your centos box and use NFS.
Or use Samba. I personally find Samba way fuggin easier to set up and use.
Then just mount the remote folder and default your storage to it.
I am getting random crashes while watching videos in ubuntu. The whole screen goes red and the computer hangs completely. I don't have this problem on dual booted windows.
Suggestions? Buy new video card?
Here is my ancient video card:
root@x1x-VirtualBox:~# lshw -c video
description: VGA compatible controller
product: Juniper XT [Radeon HD 5770]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
physical id: 0
bus info: pci@0000:02:00.0
width: 64 bits
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=radeon latency=0
resources: irq:31 memory:d0000000-dfffffff memory:fbbc0000-fbbdffff ioport:b000(size=256) memory:fbba0000-fbbbffff
I want to edit some android kernel files and I'm using Sublime Text 3. Loading the entire kernel as a project, in the defconfig file there are flags like CONFIG_CPU_FREQ=y,
and then in the source files it should only enable any code inside #ifdef CONFIG_CPU_FREQ and gray out the others. Is this even possible? Is there a plugin for Sublime Text or any other C editor that can do this?
Something weird is happening to my wired internet connection.
-Was using ubuntu mate, internet worked fine.
-Format and Install xubuntu, connection is slow, webpages take a few seconds before they start loading.
-Format and reinstall ubuntu mate again, the problem still persists.
What the fuck did I do by installing xubuntu? Any idea?
How do you customize where packages are installed or is this something I should do?
I don't want every app I download to go to places where I would forget them. I'd rather keep them in a single /apps directory and add them to my path. Should I manage things this way, if I should what is a sane way to do this?
when you run a the configure script you can set the location with --prefix= option
Your package manager usually installs to /usr/bin (--prefix=/usr)
If you want a separate directory for the applications you've compiled use --prefix=/usr/local/
packages (or at least symlinks to them) usually end up in /bin or /usr/bin, both of which should be in your path.
Worst case if you remember part of a package name you can just grep those two folders.
Usually tab autocomplete is good enough.
If you're talking about the binaries, just mv them to your /apps directory and add /apps to your PATH.
If you're installing from source, use --prefix=/apps in when you ./configure and read up a little bit about the FHS.
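A condensed sketch of that flow (the /apps root is the asker's hypothetical layout):
./configure --prefix=/apps/foo
make
make install
export PATH="$PATH:/apps/foo/bin"   # put this in a startup file to make it stick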
I'm using NordVPN but I have the problem that it sometimes disconnects
Is it possible to cut internet connection entirely evey time it does this so that I don't torrent with my own public ip? Or are there any other things I can do to prevent this?
Also what do I do against evil chinese bots that want to acces my server? install fail2ban?
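One common approach to the first question is an iptables "kill switch", so nothing can leave except through the tunnel. A minimal sketch - it assumes the VPN creates tun0 and connects over UDP 1194, which you would adjust to NordVPN's actual servers:
iptables -P OUTPUT DROP                            # default: block all outbound
iptables -A OUTPUT -o lo -j ACCEPT                 # keep loopback working
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT   # let the tunnel itself establish
iptables -A OUTPUT -o tun0 -j ACCEPT               # everything else goes via the VPN
As for the bots: yes, fail2ban with its default sshd jail is the usual answer.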
>Should I manage things this way
Not really, unless you have a specific need to do so (no write access, for example.)
Since you're doing it just because you're likely to forget, you probably shouldn't. Read the FHS article on Wikipedia, and use man "program" if you find yourself forgetting where config files are stored.
This empty command terminal launches and returns three empty lines after POST and before my OS loads. It started happening after I installed linux and now I cannot seem to remove it or figure out what it is. I have reflashed my BIOS, formatted my hard drives and installed mint17.2, debian 8.2 and windows 7. Does anyone know what it is and how to remove it?
>installing everything from scratch, even large packages such as firefox
If there is a botnet in there, you're certainly never going to find it all by yourself.
If you don't trust a distro to make clean packages, why the fuck to you trust them to make a clean OS?
Maybe I'm talking out of my ass, but I'm going to go out on a limb here and say it's just your bootloader, and you shouldn't worry about it unless it takes a particularly long time to exit that state.
it should be a symlink to /usr/bin
Are you retarded or merely pretending?
>/bin: Essential command binaries that need to be available in single user mode; for all users, e.g., cat, ls, cp.
>/usr/bin: Non-essential command binaries (not needed in single user mode); for all users.
I don't think you should be arguing with the FHS.
Don't actually do this.
While Gentoo does have its benefits, the whole reason behind "lel install gentoo" is that -- because it's a source-based distro -- you have to build every package by yourself, and it takes a shitload of time for very little benefit.
If you want to "learn" the hard way, go with Linux From Scratch.
What fucking distro are you using that /bin is a symlink to /usr/bin? /bin is for executables available in single-user (rescue) mode. /usr/bin is where your self-installed programs go, available in multi-user mode.
"You may want to add android-studio/bin/ to your PATH environmental variable so that you can start Android Studio from any directory."
I'm trying to download Android studio for linux. What does this mean exactly and how do I do it? Sorry I'm a noob
Anyone familiar with i3? If I have a terminal window open and I run a windowed application like a web browser, it opens in a separate window. Is it possible to make it run in the same window as the terminal that opened it?
Basically android studio is a bloated piece of... well it's not the worst IDE in the world, and it's better than any other option for developing for android but anyway.
Usually packages are put into /usr/bin, which is in your path, so if you type a program name the terminal knows to search in the folder that contains most of them.
Android studio is installed wherever you like really, but to avoid fuckups thats either in your home directory or maybe /opt.
It won't be in your path though, so in order to start it up you have to remember exactly where it is if launching from the terminal.
However, if you add the directory to your path, you can just start it up with ./studio.sh from anywhere you like.
You add things to the path like so:export PATH=$PATH:/wherever/you/installed/it/
This will only apply to your current terminal, so you'll need to add it to a startup script to be permanent.
To be completely honest, you'll probably want to change directory into where studio is located when you use it anyway, so I never actually bothered with this myself.
I was thinking some way to push it into the background or something. Tabbed mode works, but I didn't want a bunch of useless terminals open just being used to run applications.
Maybe a better question would be if I can run multiple programs as background processes from one main terminal? I'm kind of new to Linux.
>Maybe a better question would be if I can run multiple programs as background processes from one main terminal? I'm kind of new to Linux.
$ myprog arg1 arg2 </dev/zero &>/dev/null &
You can send any program into the background by adding a "&".
urxvt is now sent to the background.
You can check for background jobs via
jobs
Then you get something like:
[1]+ Running    urxvt &
You can bring it back to the foreground by calling its ID:
fg %1
Then you can close it as usual via CTRL+C
Guys, what's the go-to music player to install on a fresh Debian GNOME setup? I don't want any strange functions, just something straightforward. I wanted to install the eOS default player but it isn't available in the Jessie repositories yet.
Since you can do everything you want on every distro, there is no "best" one.
You can install ubuntu, remove Unity, install i3, as a crude example.
Good for older PCs is not to use heavy DEs like GNOME or KDE; choose something lightweight like Openbox or a tiling WM. The distro doesn't matter; they all run perfectly on old hardware.
Trying out Deadbeef right now.
I like that it's very dependency-lite. Besides the music format dependencies, it pulls almost nothing else. If you already have a music player, you'll probably just need to download/install the actual Deadbeef client and you'll be good to go.
It's funny how the installation wiki of Antergos has been a broken page for 3 years and no one memes about it.
I guess the manjarofags are not as deep in the edgy teenage stage as the antergosfags.
Korora went down at least 3 times
Even mint went down a few days ago
Just deal with it. If you are not exclusively using Mint or any other practical distro you are a college student or a typical NEET
I won't go into the details, but essentially:
1) Add the infinality repositories listed in the link to /etc/pacman.conf
2) Add the key ID to pacman's keyring
3) Refresh the databases with "pacman -Sy". You should see the infinality repositories listed
4) Install, via pacman (so, pacman -S), infinality-bundle, infinality-bundle-multilib if you're on x64, ibfonts-meta-base, ibfonts-meta-extended and ibfonts-meta-extended-lt.
5) Restart Xorg (either by killing and restarting it or just rebooting).
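The same steps as a command sketch - <KEYID> and the exact repo entry come from the infinality page itself, not from here:
sudo pacman-key -r <KEYID> && sudo pacman-key --lsign-key <KEYID>
sudo pacman -Sy
sudo pacman -S infinality-bundle ibfonts-meta-base ibfonts-meta-extended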
Hello /g/ I need your help.
My debian installation is taking forever to boot because it hangs for several minutes on "A start job is running for Create Volatile Files and Directories" I have tried googling the problem and most forums came up with the working solution of deleting and re-creating the /tmp directory:
Thing is, I keep getting the "cannot remove '/tmp': device or resource busy" error. I have tried lsof and umount as listed in this forum: https://unix.stackexchange.com/questions/11238/how-to-get-over-device-or-resource-busy
but none of that worked either.
This is my output for df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 117913932 111901208 4 100% /
udev 10240 0 10240 0% /dev
tmpfs 1630924 9984 1620940 1% /run
tmpfs 4077308 1232 4076076 1% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4077308 0 4077308 0% /sys/fs/cgroup
tmpfs 815464 16 815448 1% /run/user/1000
Why the hell is my shit in firefox so fucking small? I changed literally all settings in preferences and nothing helps except the zoom keyboard shortcut but I have to do it every time.
How do I add the KDE menu to cairo-dock?
So I just installed arch on my odroid X and I want to use it for airplay audio from my iphone
Ive installed kodi and followed this guide but im stuck
how do I get airplay to work? people online said it should just werk™ but it isn't
>calling my past self a complete fucktard
>1.6GB in use
This site could help explain things. It's using the RAM you aren't using for disk caching. Linux will fuck off and let you use that space if you start approaching 100% usage.
Reading this, it's quite entertaining
I'd like to change my OS. I'm currently dual-booting Windows/Linux. The distro is Debian 7.9 (wheezy) with KDE. It's quite fucked up, lots of issues (some of which I could solve), and is not nice at all.
What does /g/ suggest?
could be you have the 'quiet' option in your linux boot. Remove that from '/etc/default/grub' or manually remove it by entering the 'EDIT' mode in grub when you boot.
If you change the config file, to make it permanent run 'update-grub' afterwards.
>Arch just symlinks both /bin and /sbin to /usr/bin
dear LORD, that is terrifying. Debian at least got one thing right: It kept the single-user mode holy.
I am so sorry you have to live that way, where if a sh update breaks then you're SOL.
I'm trying to make my own commands for terminal and have a little trouble. I've created this at ~/bin:
#!/bin/bash
alias alt='ls -alt'
echo " text is here"
I type the file name 'lalt' in as a command and "text is here" comes up fine, but when i type alt nothing happens. Do i have to put the alias command in .profile or am i being retarded and missing something?
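The likely catch: a script runs in a child shell, so any alias it defines dies with it. Define the alias where interactive shells read it, then reload - a minimal sketch:
echo "alias alt='ls -alt'" >> ~/.bashrc
source ~/.bashrc   # or just open a new terminal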
>don't care about being bleeding edge, want something that I can set up and then not mess with
>...but not quite as old as Debian stable, that's just a bit too old
>it's a plus if it comes in an Xfce version
what distro do I want?
So recently i've been trying to get into using the web crawler framework scrapy. I use pip install scrapy and it goes fine but when I try to run the command scrapy startproject tutorial it tells me that scrapy isn't installed at all.
Why would that be?
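A hedged guess at the usual cause: pip installed the scrapy script somewhere outside your PATH (or for a different Python). One way to check:
pip install --user scrapy
export PATH="$PATH:$HOME/.local/bin"   # where --user console scripts land
scrapy startproject tutorial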
I do feel comfortable with Linux. Should I jump to the latest Arch or just a stable version?
Also, how do I list the installed packages on konsole? I'll probably install all the build-essentials+opencv+matlab and some latex editor. Thanks in advance
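For the second question, pacman's query mode covers it:
pacman -Q            # every installed package
pacman -Qe           # only the explicitly installed ones
pacman -Qs latex     # search installed packages by keyword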
About IceCat browser, what's the difference between these two releases:
icecat-38.5.2-gnu1.tar.bz2 2015-12-26 05:49
icecat-38.5.2-gnu1.tar.bz2.sig 2015-12-26 05:49
icecat-38.5.2.en-US.linux-x86_64.tar.bz2 2015-12-26 05:49
icecat-38.5.2.en-US.linux-x86_64.tar.bz2.sig 2015-12-26 05:49
I'm on limited data cap so if I can save 180MB I wouldn't mind it, but I guess there's a reason why that pack is so big? I installed OpenSUSE last week and I want to try some FF alternative for once, this looked nice.
About OpenSUSE, I used the version that comes with Gnome DE but I think it was a mistake, it looks very minimal, can I install another DE without reinstalling OpenSUSE from scratch?
I wanted to try Cinnamon though I cannot download it from the Linux partition because I didn't fix internet for it yet and I don't plan to use internet on Linux till I feel safe enough (I probably left some backdoor-tier settings enabled).
How should I go about installing Cinnamon manually? terminal or something else? I'm on dual-screen setup if it makes any difference.
Also about the OpenSUSE, I changed the root password from terminal and it works well, but it still keeps asking the old password on the user login screen. How do you change the login password?
Last but not least question: opinion on OpenSUSE distro?
>Should I jump to the latest Arch or just a stable version?
What? There's literally no such thing as "stable Arch", it's always latest.
You download a monthly snapshot, update the packages, you got yourself the latest Arch.
The reason i mentioned .profile was because there is no .bash_profile or .bashrc listed when using ls -a, neither is .bash_aliases but i imagine that's something user created. I've added the aliases to the end of .profile though and still nothing. On Mint, Cinnamon 17.3
damn small linux is smaller.
Still, point taken, arch is relatively small.
Dual boot for learning.
Debian sid. I'm not taking a poll. Use debian.
Last I checked, outside of windoze 10, ctrl+c in a windows command prompt was 'abort' as well. Not even sure how you abort programs without doing a ctrl-break in 10 now.
anyway, ctrl+c and ctrl+v have ascii control sequences in linux/unix that go all the way back to the first terminals.
Copy/paste is usually ctrl+shift+c and ctrl+shift+v.
Pictured: my old terminal, on right. Model IBM 3161.
Debian sid and do it right from the start this time. Wheezy is pretty outdated now.
Debian testing or unstable.
Not out of date, XFCE's an option.
they both forked mplayer - they did not start over. They inherited whatever bad code mplayer may have.
One's a signature file (.sig), the other is the actual tarball. Of the two tarballs themselves, the -gnu1 one is source code and the en-US.linux-x86_64 one is a prebuilt binary.
Try MATE, it's awesome.
Anyway, if you want to get a DE and don't have any internet connection, good luck. Either build from source copied via flash drive, or set up internet first.
>One's a signature file, one's the actual tarball with the program inside.
I forgot the size, pic related.
>Try MATE, it's awesome.
Is it compatible with OpenSUSE?
>Either build from source copied via flash drive, or set up internet first.
I have some experience with compiling stuff in MinGW, is it the same? Are there any things I have to know beforehand, or can I just ./configure, make and install my way into it?
It's a terminal, it runs whatever the computer it's connected to is running.
Are you unfamiliar with what terminal windows on linux actually do?
Terminal windows like rxvt, gnome-terminal, mate-terminal, xterm, etc. are all actually terminal emulators - they pretend to be real hardware devices made in the 1970s through the 1990s that connected to either a modem or another computer via RS-232 serial cables and provided many users access to the same central computer (this is called a 'time sharing' system). The terminal has very little processing capability of it's own, it acts as an interface between a user and the server/host he is connected to.
I can play text based games with it, like Rogue and Zork, but only because I compiled programs for my desktop PC it's connected to.
OpenSUSE? ew. I wouldn't do it, but yeah, it should be. It's open source and linux-based after all (It's been ported to cygwin's X11 environment too, actually).
It's extremely similar to mingw, if you used the MSYS environment. I actually started there, too. Built ffmpeg.
If you have required package dependencies, just ./configure, make, make install should work.
If you don't have a configure file, try running libtoolize and autoreconf first.
MATE from source is kinda painful and has lots of dependencies, though, so I'd get internet working before ANYTHING ELSE if I could afford it. | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718296.19/warc/CC-MAIN-20161020183838-00546-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 43,298 | 443 |
https://www.mail-archive.com/[email protected]/msg06284.html | code | sijie commented on a change in pull request #1117: BP-29: Metadata API module
File path: site/bps/BP-29-metadata-store-api-module.md
@@ -0,0 +1,87 @@
+title: "BP-29: Metadata API module"
+state: 'Under Discussion'
+We have already abstracted all the metadata operations into interfaces. And all the bookkeeper implementations only rely on metadata interfaces, rather than depending on zookeeper.
+This proposal is to organize the metadata interfaces and their implementations in a separate module and make the bookkeeper implementation only depend on metadata interfaces, not depend on zookeeper. This would have a few benefits:
+- It allows supporting different metadata storages, without bringing in dependencies of metadata store implementations directly into the bookkeeper-server module. The development of different metadata storages can be done without interleaving with each other.
+- It would define a clean module dependency between bookkeeper implementation and metadata api, and how bookkeeper load a different metadata
@jvrao we already use reflection to initialize the ledger manager factory.
The performance overhead applies only when resolving and initializing the class; once the class is initialized, calls to its methods are not made through reflection.
Hope this clarifies things.
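A sketch of the pattern being described (the class-resolution code is illustrative, not BookKeeper's exact API): reflection pays its cost once at initialization, and every later call is ordinary virtual dispatch.
// resolve and instantiate the factory reflectively, once, at startup
Class<?> clazz = Class.forName(conf.getLedgerManagerFactoryClassName());
LedgerManagerFactory factory =
    (LedgerManagerFactory) clazz.getDeclaredConstructor().newInstance();
// from here on, no reflection is involved on the call path
factory.initialize(conf);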
| s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814311.76/warc/CC-MAIN-20180223015726-20180223035726-00413.warc.gz | CC-MAIN-2018-09 | 1,547 | 27
http://tofoodreviews.com/new-south-wales/how-to-make-a-drive-gpt.php | code | Create A GPT USB Drive Password Recovery
In order to perform a clean install of Windows 8 on my computer, I've made a bootable Windows 8 installation USB drive using Microsoft's Windows USB/DVD download tool.... Click Convert to GPT Disk from the menu and confirm when the dialog message shows; click Commit in the first window and Partition Expert will change the MBR disk to a GPT disk while keeping the data on the disk intact.
Steps required to convert a GPT partition to MBR to allow
Click on the GPT drive to proceed to the next step of this process. Go to "All-Around Recovery" if you can't find your data after the quick scan. After the scanning finishes, you can preview the recovered files and see which ones you need.... I'll answer the last question first - I've never heard of needing a GPT drive to install Win7 64-bit. I ran Win7 Pro 64-bit from an MBR drive for years.
Booting from GPT rodsbooks.com
This article provides information about how to resolve an issue with the EFI Bootloader not booting correctly on a GPT Hard Disk Drive for a number of Windows Operating Systems. 24/10/2017 · My system at present is MBR. When the Fall update is released I want to convert to GPT. This will make it a whole lot easier. Just let me know when you need a DISKPART script, I can make a custom script for you based on how you want to partition the …
How to Clone GPT HDD to SSD in Windows 10/8/7?
31/08/2018 · I don't understand why people would do clone options unless the drive is dying or you can't do a full backup image. If you bought a new drive the best method would be a clean installation. Before we show you how you can convert a disk from GPT to MBR, or MBR to GPT, you should know: both HDDs and SSDs can be MBR or GPT drives, and changing a drive from one type to the other will result in your drive being wiped clean, so be careful.
How To Make A Drive Gpt
Hard drives and SSDs, whether internal or external, appear to be plug and play devices. End users will rarely have to set up a hard drive for anything. Even if you plan on doing a clean Windows install, the hard drive or the SSD that’s on your system will be ready for installation. That said, you might need to change the partition table for a drive from MBR to GPT, or from GPT to MBR.
- In order to make a UEFI system boot from a USB flash drive, the latter has to be formatted in the FAT32 file system. An official Microsoft utility for creating bootable USB flash drives, Windows 7 USB/DVD download tool , formats a flash drive to the NTFS file system.
- The following examples show how to create basic and dynamic disks using the DiskPart command (see the DiskPart sketch after this list). Example 1: Creating basic disks using the DiskPart command. Select a disk, whether it is a raw or dynamic disk, and convert it to basic storage type.
- If your PC has a GPT partition scheme (GUID Partition Table), then pick the appropriate option from the combobox. Click on the CD/DVD drive icon to browse to the Windows 10 ISO image file. Select the Windows 10 ISO image file - 32-bit or 64-bit - whichever you downloaded. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00341.warc.gz | CC-MAIN-2019-26 | 3,567 | 20 |
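The DiskPart commands behind both conversions, as a sketch; the disk number is an example (confirm with "list disk" first), and note that clean erases the whole disk:
diskpart
list disk
select disk 1
clean
convert gpt
Run "convert mbr" instead to go the other direction; both require the disk to be empty, which is why the clean step comes first.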
http://www.bullshido.net/forums/showthread.php?t=116216&goto=nextoldest | code | Apache Knife fighting?
I'm not sure if this is the right place for this question - apologies in advance, just saw plenty of knife references.
Is anyone aware of "Apache" knife fighting as some sort of method/system?
And if so are there any links to the US military in the second world war?
Ah bolox - search function, just ignore this
Last edited by Hexe; 5/03/2012 8:18am at .
Reason: overtired, overworked, underthought
I did a search too just to see what came up. I know a guy that's trained with the Apache Knife guys. Their training methodology focuses on, among other things, sparring with trainers with fake blood smeared on the forearms, to simulate the slipperiness that can happen when fighting. So, that's a consideration that they make in training that I haven't really seen elsewhere. The guy I know is a park ranger, and he's usually carrying like 3 concealed knives.
I'm fairly certain that their style doesn't come from a direct master-to-student progressive curriculum kind of thing; they knife spar and draw a bit on Apache history. Like most Native American martial arts, it's not a pure ancient fighting art and seems to follow the more familiar pattern of a Native American guy learning outside martial arts then appropriating them in his own culture's trappings.
It appears there have been a few books/DVDs published on the subject matter - my interest extends to whether anyone outside of the US that claims to teach this is probably book-schooled, which is to say not very schooled at all.
The US Military links I come up with are these, or derivatives of are like this, Permalost - is this the mob your Ranger mate trained with?
Is he legit and effective? I thought the thread was some kind of joke at first, but I won't discount something that I clearly don't understand.
Originally Posted by Permalost | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00266-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 1,825 | 13 |
http://www.vistaheads.com/forums/microsoft-public-windows-vista-installation-setup/12506-bought-wrong-vista-edition.html | code | Re: Bought the wrong Vista Edition
OK. Thanks. I may try it.
"Michael Jennings" wrote:
> Here is my modest proposal:
> Send Vista Home Premium back. Wait for the refund check. When you
> get the $160, then buy the Vista Ultimate upgrade. It gives you a month
> or two to cool off, and it puts an aggravation on Microsoft.
> "SteveB" <[email protected]> wrote in message news:[email protected]...
> >I have an IBM Thinkpad with XP Pro. It passed all the hardware checks in the
> > Vista Upgrade Advisor and IBM/Lenovo has upgraded drivers and Vista
> > instructions. I bought a retail version of Vista Home Premium.
> > IBM/Lenovos upgrade instructions are to do an upgrade but during the install
> > Vista said I'd need Business or Ultimate to upgrade from XP Pro. I went
> > ahead and made a fresh install. Afterwards, I couldn't attach to the
> > Internet. My Ethernet adapter said it was working correctly but there was a
> > second "Other" Ethernet Controller that didn't have a driver. Vistga
> > couldn't find my built in wireless adapter either. After trying
> > unsuccessfully to install drivers all day I contacted MS support from another
> > XP machine. It said I wasn't eligible for the 90 day support and they wanted
> > $60 for an incident report. I suspect it may have been because I was on a
> > separate machine - since Vista wouldn't connect.
> > In any event, I spent the rest of the day rebuilding the machine back to XP.
> > I hate to have wasted the $160 on Vista Home Premium. I may be willing to
> > go the extra for Ultimate but only if I can pay the difference between the
> > two. Or I suppose, I can give up, write off the $160 and stay with XP.
> > Any suggestions would be appreciated. Thanks.
> > --
> > SteveB | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543614.1/warc/CC-MAIN-20161202170903-00453-ip-10-31-129-80.ec2.internal.warc.gz | CC-MAIN-2016-50 | 1,787 | 28 |
https://conferences.oreilly.com/software-architecture/sa-ny/public/schedule/speaker/365002.html | code | Lead Data Scientist, ThoughtWorks
Devangana Khokhar is lead data scientist and strategist at ThoughtWorks. She brings 6+ years of experience in building intelligent systems and defining data strategy for clients across multiple domains and geographies. Devangana has a research background in theoretical computer science, information retrieval, and social network analysis, and she’s written a book on network sciences, Gephi Cookbook (Packt Publishing London). Her interests include data privacy and security, the role of data in humanitarian sector, ethics and responsibilities around data, reinforcement learning, and data-driven intelligence in low-resource settings. Devangana frequently consults for and guides nonprofit organizations and social enterprises on the value of data literacy and holds workshops and boot camps on various dimensions of data. She earned her master’s degree in theoretical computer science specializing in social network analysis from PSG College of Technology, Coimbatore, India.
| s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00683.warc.gz | CC-MAIN-2023-14 | 1,281 | 7
https://www.gardengrocer.com/products/index/1/1863-ocean-spray-cranberry-juice-cocktail-64oz-btl/2-beverages/40-juices-shelf | code | Quick and responsive service.received a text when my groceries were delivered to our resort.they made it way before we did and were waiting for us.
Everything was great, really! Thanks for all your help
I love Garden Grocer! We used them over three times and our groceries were delivered on time. Would recommend them to anyone.. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447729.93/warc/CC-MAIN-20151124205407-00232-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 330 | 3 |
https://builtbybit.com/tags/free-plugin/ | code | Basically as title says.
All free plugins will be put publicly on here and/or my github
You'll get access to the SRC but you won't get any updates or changes from me
All information is on my GitHub profile.
Soo, hello, my first post here..
First of all: English is not my native language, so i apologize for any mistakes.
I'm here to code free plugins for you, BUT keep in mind:
- The plugins are free, so they do not belong to you (but you can use them as you like).
- They will be open-sourced plugins
Hello! I am currently offering FREE Spigot plugin development. I need plugins to put for a portfolio on my GitHub, since it is currently empty.
I can provide high-quality plugins because I'm not new to plugin development, I just need stuff for my GitHub and I cannot put paid plugins on there...
So I have been making bukkit/spigot plugins for a while and I heard about this site from another developer that uses this website. I am willing to make a simple plugin of your choosing for just a vouch if I complete it to your liking.
Dm me on discord GoodVibes#1232
or email me at...
Free Plugin Development
I am currently trying to build a reputation on MCMarket, so if you have a small plugin you would like me to make, I'm all in, I don't expect any payment other than a vouch! :)
For your request, please follow the following structure so I know exactly what you want...
Hey! My name is Moshe!
I am a fairly new developer using the SpigotAPI and Java, with 1-2 months of experience, and I am looking for ways to work on my developing skills and build a portfolio, and decided to do so by offering some free plugins.
I'll make any small-medium plugin that you're...
-- No Longer Offering! --
Offering development of small-medium sized custom plugins.
6+ years of Java Development
High Quality Plugins
Looking to gain some reputation and vouches, add me on Discord:
I am looking for a thread design for either one (or both!) of my MCM resources, found here:
In exchange for the thread design I will provide a copy of the plugins for free.
Please add me at my...
Hello MCM! I'm trying to find someone who has a really good idea for a plugin that I can put for sale on MCM and/or SpigotMC. I will give credit to you for the idea and if I make a decent amount of money, I'll send some your way.
PM me your ideas! I shall get back to you ASAP.
Hello McMarket, I am offering small plugins for anyone to build my portfolio. A vouch would be expected in return for the plugin unless it is an idea.
If you have any ideas for me, or need a plugin, use the format below:
Name of plugin:
explain what it does:
have you added me...
Hello, I am offering free Minecraft spigot plugin development for small and simple needs and ideas. If you are interested, contact me either via discord or by leaving a message below.
Discord: David M.#9898
My main reason for doing so is to build up my portfolio and knowledge. Thank you for...
Hi there, I have been learning the SpigotAPI for around a week now and want some practice, so I'll be offering a free basic plugin, at least if I know how to do it or can find a method/event for it.
If you are interested there is no guarantee that I will complete the plugin and you may not claim the...
Hello, I am in desperate need of an OG account.
I am an experience developer, willing to make you a completely custom plugin of your choosing, with a range of features from moderate to advanced.
If you are interested please add me on discord @suicide#0001 and shoot me a message
This plugin allows you to set a custom prefix in-game to trigger any command, similar to how a lot of Discord bots allow you to set the command prefix.
e.g. Instead of running '/help' in-game, you could use '!help' if the prefix '!' is set, but you can also set multiple prefixes at...
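As a rough sketch of how such a plugin can work (my assumption, not the resource's actual source): listen for chat messages, and when one starts with a configured prefix, cancel the chat event and dispatch the remainder as a command on the main thread.

    import org.bukkit.Bukkit;
    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;
    import org.bukkit.event.player.AsyncPlayerChatEvent;
    import org.bukkit.plugin.java.JavaPlugin;

    public final class PrefixPlugin extends JavaPlugin implements Listener {
        private static final String PREFIX = "!"; // would normally come from config.yml

        @Override
        public void onEnable() {
            getServer().getPluginManager().registerEvents(this, this);
        }

        @EventHandler
        public void onChat(AsyncPlayerChatEvent event) {
            String message = event.getMessage();
            if (!message.startsWith(PREFIX)) return;
            event.setCancelled(true); // keep "!help" out of public chat
            // chat events fire async; commands must be dispatched on the main thread
            Bukkit.getScheduler().runTask(this, () ->
                    event.getPlayer().performCommand(message.substring(PREFIX.length())));
        }
    }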
I'm very new to this site so I'd like to establish my developer skills and legitimacy by making plugins for free. If you're interested my Discord is SentinelGaming#4525 and I will assess your plugin concept.
I am free to decline any plugin proposition and cancel any current work in...
Do you need any help?
SpigotMC: *Click here*
If you experience any issues, feel free to directly contact me.
This simple plugin helps server... | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00317.warc.gz | CC-MAIN-2022-40 | 4,204 | 51 |
https://icwatch.wikileaks.org/search?action=index&company_facet=Exelis&controller=search&page=1&skills_facet=MySQL&tools_mentioned_facet=development | code | Enjoy developing for personal projects to test and learn new technologies for Web, Android and IOS application design, development and testing. Professionally, I am a full-stack agile software developer in wide variety of applications/platforms and a quick study. Software Engineering experience:● Web development using AngularJS, NodeJS, Ruby on Rails, Java Scripts/JQuery, JRuby and Mongo. ● Mobile application development in Android and iOS for embedded platforms and satellite communications. ● UI development using C#, Java, and (QT) C++.● Embedded application development using Python, C and C++.● Desktop application development in Windows and Linux platforms. Systems Engineering experience: ● Building, configuring and managing failover systems, firewalls and networks from ground up. ● Developing scripts (bash and c-shell) to automate on demand needs such as data marshaling, patching systems according to Security Technical Implementation Guide (STIG).
Responsible for leading, managing, developing and deploying mission-critical web applications and systems using agile methodologies. Duties include providing high-level support to the internal software/test team for all software configuration management, development, debug/test and release activities. In other projects, developed Android and Windows Compact Framework based custom applications for custom communication devices and worked on porting Compact Framework applications to other platforms. Implemented (Python) drivers for the Atmel AT91M55800A ARM7 Core w/ Nucleus processor for NIST AES Certification Validation. In addition to developing software, also developed/implemented the software test plan, built systems and failover clusters for test and simulated production environments, and traveled to the operational site for systems/software deployment. Responsibilities in other projects included implementation of new functionalities in the embedded architecture for the Watchdog project. Building embedded Linux systems that interact with various other small systems was also part of the development responsibilities. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659056.44/warc/CC-MAIN-20190117163938-20190117185938-00454.warc.gz | CC-MAIN-2019-04 | 2,109 | 2
https://mengnandu.com/publication/kais20/ | code | Recent studies have shown that state-of-the-art DNNs are not always credible, despite their impressive performance on the hold-out test set of a variety of tasks. These models tend to exploit dataset shortcuts to make predictions, rather than learn the underlying task. The non-credibility could lead to low generalization, adversarial vulnerability, as well as algorithmic discrimination of the DNN models. In this paper, we propose CREX in order to develop more credible DNNs. The high-level idea of CREX is to encourage DNN models to focus more on evidences that actually matter for the task at hand and to avoid overfitting to data-dependent shortcuts. Specifically, in the DNN training process, CREX directly regularizes the local explanation with expert rationales, i.e., a subset of features highlighted by domain experts as justifications for predictions, to enforce the alignment between local explanations and rationales. Even when rationales are not available, CREX still could be useful by requiring the generated explanations to be sparse. In addition, CREX is widely applicable to different network architectures, including CNN, LSTM and attention model. Experimental results on several text classification datasets demonstrate that CREX could increase the credibility of DNNs. Comprehensive analysis further shows three meaningful improvements of CREX: (1) it significantly increases DNN accuracy on new and previously unseen data beyond test set, (2) it enhances fairness of DNNs in terms of equality of opportunity metric and reduce models’ discrimination toward certain demographic group, and (3) it promotes the robustness of DNN models with respect to adversarial attack. These experimental results highlight the advantages of the increased credibility by CREX. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00317.warc.gz | CC-MAIN-2021-25 | 1,783 | 1 |
https://community.spiceworks.com/topic/2025116-skype-for-business-disk-full-can-t-find-alleged-data | code | We're running Skype for Business 2015 on a dedicated server. The C: disk keeps (allegedly) filling up to the brim, but we cannot find the data. WinDirStat identifies 57GB of data on the entire 400GB disk, and yet Windows says it's full to the point where nothing works anymore. The disk started smaller and we've increased it, and it just fills up, losing about 10MB every couple of minutes. Anyone else running Skype Business and/or seen an issue with alleged data growth that can't be accounted for? We're inclined to just delete and start over, but we're not convinced it won't just happen again if we don't identify the cause.
Found a similar thread from 2012. WinDirStat found the data as "unknown". https://community.spiceworks.com/topic/202369-server-disk-drive-almost-full-but-it-can-t-be
The shadow copy properties on the volume show two entries, both with limits (320MB and 40GB), so that doesn't exactly explain why we're at 342GB of "unknown" data.
Both shadow copy entries say they have limits, but are both using 0 bytes.
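For anyone debugging the same gap: the VSS allocation that the Explorer properties dialog under-reports can be inspected and capped from an elevated command prompt (the size below is only an example):

    vssadmin list shadowstorage
    vssadmin list shadows /for=C:
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=20GB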
Skype has been making logs in C:\inetpub\logs\LogFiles\ and even the domain admin account/session didn't have the rights to identify the data.
Once upon a time, when the top portion of a network I worked on was managed by another group, there was this little server that kept running out of room. All searches provided no reason for this. I booted the device with a Linux live disk and found a mass of French-Canadian music stored on the drive that was set as a locked and hidden admin share. The device had been hacked and was being used as a music share zombie! The device did not survive long after that was found!
This was years ago and they have gotten their act together since. Just something to be aware of when running into these issues. Glad to hear it is only log files. If you can't get to it through the usual way, boot it under a live Linux and nuke the files; Linux easily ignores NTFS permissions. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107887810.47/warc/CC-MAIN-20201025041701-20201025071701-00270.warc.gz | CC-MAIN-2020-45 | 1,946 | 7
https://jorgklein.com/2009/11/08/ssis-let-the-excel-connection-manager-pick-the-right-column-data-types-from-an-excel-source/ | code | The excel connection manager scans every first 8 rows to determine the data type for a column in your SSIS source component. So if an Excel sheet column has integers on the first 8 rows and a string value on the 9th row, your data flow task will crash when executed because SSIS expects integers.
Fortunately you can change the number of rows that Excel will scan with the TypeGuessRows registry property.
1. Start Registry Editor by typing “regedit” in the run bar of the Start menu.
2. Search the registry (CTRL-F) for "TypeGuessRows".
3. Double click “TypeGuessRows” and edit the value.
Todd McDermid (MVP) contributed the following useful addition:
“Unfortunately, that reg key only allows values from 1 to 16 – yes, you can only increase the number of rows Excel will “sample” to 16.”
“The reg key also allows the value 0. When this value is set, the excel connection manager scans every row to determine the data type for a column in your SSIS source component.”
Thanks Robbert, I think setting it to 0 can be very powerful in some scenarios!
- TypeGuessRows 0: All rows will be scanned. This might hurt performance, so only use it when necessary.
- TypeGuessRows 1-16: A value between 1 and 16 is the default range for this reg key; use this in normal scenarios.
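For scripted changes, the same value can be set with reg add from an elevated prompt. The key path depends on which provider and Office version you have; both paths below are common examples, and on a 64-bit OS a 32-bit provider lives under Wow6432Node instead, so check your own registry.

    rem Jet provider (.xls):
    reg add "HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f

    rem ACE provider (.xlsx) - replace 14.0 with your Office version:
    reg add "HKLM\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f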
Does anyone know, in a loop to read many Excel files, does the excel connection manager RESCAN the first N rows for each file it opens
OR does it set the data types based only on the first file it finds ?
Thanks for sharing this.
Setting the value to 0 actually still only scans the first 16384 rows, which in some scenarios can still be a limitation (as discovered by http://social.msdn.microsoft.com/Forums/sqlserver/en-US/6496b806-c0d9-4ab7-b309-aa34550aaa1d/ole-db-connection-error-failed-to-retrieve-long-data-for-column?forum=sqlintegrationservices&prof=required).
We should encourage Microsoft (through Connect) to just accept the fact that the developer knows best and, if we specify a certain data type for a column, accept that that's what it is.
@jcridge: it will rescan for every file it opens. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00659.warc.gz | CC-MAIN-2023-14 | 2,103 | 17 |
http://www.microsoftpdc.com/2009/FT09 | code | You are currently watching the WMV (640x360) video. Click to watch the high-quality WMV (960x540, not smooth streaming) video
Scrum for Team System v3 significantly evolves the leading Scrum process template by leveraging the capabilities of Visual Studio Team System 2010 Team Foundation Server (TFS 2010) to enhance the support for Agile best practices. Hear how a large customer extended its process model, supports its enterprise scale Scrum projects and Acceptance Driven Development. Additionally, learn how the template takes advantage of the new hierarchical work item capabilities, integrates with Microsoft Test and Lab Manager and supports the new deployment topologies for TFS 2010. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699924051/warc/CC-MAIN-20130516102524-00020-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 694 | 2 |
https://2wagyu4ever.ca/sample-page/ | code | In May 2022 I built a node for the Avalon Testnet in collaboration with Techcoderx, one month before the Space Odyssey release.
A few weeks after the release of Avalon's 6th hard fork, I managed to get my Avalon node onto the Mainnet. Although I am not an elected leader yet, I am very proud to run my node. Unfortunately, I am not a dev and it is very hard for me to develop new things for D.Tube. But I am very active on the platform and I am always trying to find new ideas and feeding the other devs the bugs I find.
You may want to ask: if I am not a dev, what can I do as an elected leader, and why should you vote for me?
1. Since September 2022 I have been on the richlist, so I have plenty of VP for voting.
2. My primary mission is to vote on every single DTubeGo Moments video.
3. I am voting on my leader-voters' videos if they are original DTubers and the video is not controversial.
4. As soon as I am an elected leader I want to run contests to encourage new activity on D.Tube.
5. I am also providing liquidity to D.Tube via the Farm
Please note that I am not using a third party like YouTube for my videos. Peer-to-peer is my way to go on D.Tube and this is aligned with D.Tube's primary philosophy.
Also, feel free to contact me at any time on the D.Tube Discord. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506481.17/warc/CC-MAIN-20230923130827-20230923160827-00143.warc.gz | CC-MAIN-2023-40 | 1,285 | 10 |
https://community.canvaslms.com/thread/18578-course-does-not-have-add-module-on-home-page | code | On the homepage to just one of my courses, I am missing the option to add modules or anything else. The only information on the page is a reference to recent activity or messages. All my other courses have the option for "+module," "Add new module," etc. I've spent a lot of time looking through my options to see if it was something I could fix, but I'm at a loss.
I'd like to be able to set up this course the same as my others. How can I regain this functionality? | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578806528.96/warc/CC-MAIN-20190426133444-20190426155444-00407.warc.gz | CC-MAIN-2019-18 | 472 | 2
http://www.coderanch.com/t/512484/Android/Mobile/Garbage-collection | code | The common problems I have found for android is memory exception. Being a mobile application developer we have to be more conscious about memory usage for running apps. Is the Garbage collection based on Activity life cycle?
Is the Garbage collection based on Activity life cycle?
I don't think garbage collection is tied to the Activity lifecycle in particular. The collector itself (or its optimizing engine) is expected to determine the best time to perform a collection. It is based on the allocations being made; eventually the GC will run.
For example, before getting an OutOfMemoryError, collection is guaranteed to have been performed once.
Instead of focusing on the times when collection is done, try to keep memory leaks in check and avoid creating objects wherever possible. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459875.44/warc/CC-MAIN-20151124205419-00050-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 785 | 5
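As a small hedged example of the leak-avoidance advice (a generic pattern, not from the thread): a long-lived singleton should hold the Application context, never an Activity, or the Activity can never be collected.

    import android.content.Context;

    public final class Cache {
        private static Cache instance;
        private final Context appContext;

        private Cache(Context context) {
            // keep only the Application context so no Activity gets pinned in memory
            this.appContext = context.getApplicationContext();
        }

        public static synchronized Cache get(Context context) {
            if (instance == null) {
                instance = new Cache(context);
            }
            return instance;
        }

        public Context context() {
            return appContext;
        }
    }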
http://forum.xda-developers.com/showthread.php?p=25257017 | code | [Q] Bravia Engine Tweak
I'm running on CM7 bro edowar
I need to implement the Bravia Engine tweak, but I don't know which one matches our device. I tried searching Google but got no results, just Bravia Engine tweaks for other devices.
Can anyone help me or give me the Bravia Engine file to implement?
Sent from my CSL-MI410 using Tapatalk 2 | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132007.18/warc/CC-MAIN-20140914011212-00306-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | CC-MAIN-2014-41 | 324 | 5 |
https://docs.microsoft.com/en-us/archive/blogs/david_gristwood/things-to-do-in-birmingham-on-a-wet-saturday-37-code-windows-azure-applications | code | Things to do in Birmingham on a Wet Saturday #37 – Code Windows Azure Applications
So it's a Saturday and it's wet outside - what else would you want to do but write some Azure code? That's what was happening yesterday at the Azure Open Space Coding Day. The event was sold out and we spent the day writing code. Many attending were new to Azure, so much of the morning was spent getting folk up and going, but by the afternoon people were syncing data between SQL Server and SQL Azure, playing with Windows Azure queues and tables, testing out ideas for WCF Azure services and even looking at PHP on Azure.
|Eric Nelson points out the emergency exits|
|Everyone hard at work programming|
|Eric attempts to bribe an attendee with a book on cloud computing| | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738425.43/warc/CC-MAIN-20200809043422-20200809073422-00183.warc.gz | CC-MAIN-2020-34 | 757 | 5 |
https://bestofama.com/users/AutoRedditPython | code | AutoRedditPython696 karma2019-02-28 19:06:21 UTC
Hello hallwaypizzaguy, I hope you have a wonderful day!
AutoRedditPython - 9 karma - 2019-02-28 19:39:30 UTC
Hello yesididthat, I hope you have a wonderful day!
| s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347458095.68/warc/CC-MAIN-20200604192256-20200604222256-00538.warc.gz | CC-MAIN-2020-24 | 380 | 7
https://help.intelligencebank.com/hc/en-us/categories/200081400-FAQs-Technical-Help | code | FAQs & Technical Help
This Frequently Asked Questions page ('FAQs') lists common questions and troubleshooting tips to help you navigate your IntelligenceBank platform, as well as technical assistance such as queries regarding servers, APIs, supported browsers, etc.
- WebSockets - How to resolve access issues?
- IT Platform URL Allow-List Requirements
- Nightly Back-Ups of data
- AWS Ingestion Process for Physical Data Migration
- IE 11 - Clearing Cache and Cookies
- Setting up a Custom URL for your IntelligenceBank platform
Frequently Asked Questions
- What is my API v2 URL?
- Browser Support
- Resources - What is the best way to order my files and folders?
- Resources - What is the difference between using Keywords and Filters?
- Overall: What exactly is customizable on the IntelligenceBank platform?
- Resources - How Can I See a Record of All My Files At Once? | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646144.69/warc/CC-MAIN-20230530194919-20230530224919-00123.warc.gz | CC-MAIN-2023-23 | 872 | 15 |
https://www.tr.freelancer.com/projects/electronics/delta-plc-programmer/?ngsw-bypass=&w=f | code | standard cut to length with batch counter. using a heated wire with a temperature inter lock. length to be measured by encoder.
13 freelancers are bidding an average of $20/hour for this job
Hello, I am an experienced electronics engineer with more than 14 years of experience in product development. I have designed a number of products starting from scratch, including mass-producible designs and circuit des More
Hello, I can start work on it immediately. I have rich experience in Delta PLC and encoders. I'm very confident that I can meet your expectations and give you 100% satisfaction and quality work. Hope you will give me this More
I am an automation/controls engineer with more than 10 years of experience. I guess you want to use an incremental encoder without a high-speed input, which is a bit tricky.
Hi, I'll need more data to bid properly on your project. If you have a YouTube video of what you want it will be easier. Maybe a DVP-SV will work for you. Regards | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00573.warc.gz | CC-MAIN-2021-17 | 986 | 6
https://www.oxfordsbsguy.com/2015/12/20/exchange-2013-cumulative-update-installation-tips-best-practices/ | code | Edit: The latest update is now Exchange 2013 Cumulative Update 18. See here for a list of all updates and KB articles.
I’ve got a few Exchange 2013 installations in production now so I thought it was about time I wrote a post on best practices when installing Exchange 2013 Cumulative Updates and Service Packs.
Each Cumulative Update is a version of Exchange in its own right. Therefore if you are installing a new deployment of Exchange you can install straight from the latest Cumulative Update, which at the time of writing is CU11 (released December 2015). KB3099522 contains a list of fixes and enhancements for CU11. Check here to see what the current latest version is.
This also means that once installed you cannot uninstall a Cumulative Update; if you do, you uninstall Exchange.
The update is approximately 1.6GB in size and can be downloaded from here.
In my test lab with a single Exchange server and single Domain Controller the update took 60 minutes to install.
The official Exchange Team blog post can be found here.
Microsoft Support Policy
Microsoft will support the last two Cumulative Updates, so currently they support CU10 and CU11. They will support the oldest Cumulative Update for 3 months after the release of the latest Cumulative Update, which makes sense as the Cumulative Updates are based on quarterly releases.
Exchange 2013 Cumulative Update best practices
Most of my deployments are in small to medium sized businesses and are usually single Exchange Server environments, so these tips are aimed at them; however, most if not all are applicable to larger environments too.
- Test the update in a non-production environment first before deploying to a production environment.
- Consider waiting a week or two after the release date before deploying in production if you don’t have a test environment, in case there are any QA issues with the CU.
- Reboot the server so that it is in a known good state.
- Make sure you have a known good backup of Active Directory.
- Make sure you have a known good backup of your Exchange Server.
- Backup any customisations (OWA); as each Cumulative Update is basically an in-place upgrade, customisations will not be retained.
- Run the Cumulative Update from an elevated command prompt.
- An Active Directory Schema modification will often be required, so make sure the account you are using has the ability to do this.
- If you are upgrading a DAG member, place it into maintenance mode first.
- In Internet Explorer, deselect “Check for Publisher’s certificate” and “Check for server certificate revocation”, from Internet Options, Advanced tab, Security options.
- Disable antivirus software - this was a tip for installing update rollups on Exchange 2010; I've not seen any references to this for Exchange 2013, though.
- Disable Backup Exec services - does anyone use it anymore? Another tip from installing update rollups for Exchange 2010, but again I've not seen any references to this for Exchange 2013.
- Once the update has completed, reboot your server.
- Once rebooted, test that the server is functioning correctly. Use the cmdlets Test-ServiceHealth to confirm the services are running, and Test-MapiConnectivity to confirm access to mailbox databases. Check the ECP and Outlook Web App, as in the sketch below.
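A post-update check along those lines from the Exchange Management Shell (the server name is a placeholder):

    # confirm required services are running
    Test-ServiceHealth -Server EX01

    # confirm mailbox databases accept MAPI logons
    Test-MapiConnectivity -Server EX01

    # verify the build number matches the CU just installed
    Get-ExchangeServer | Format-Table Name, AdminDisplayVersion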
Don’t forget to undo the changes made in steps 10, 11 and 12.
Personally I like to update Active Directory from the command prompt and then run the update in the GUI. You can run it all from the command prompt or all in the GUI, it makes no difference, but I like to see AD Schema updates as they happen.
Obviously prior to this point you’ve followed the steps above to prepare for the Cumulative Update.
- Run Exchange2013-x64-cu11.exe from an elevated command prompt.
- Extract to C:\Sw\Exch2013CU11 and then go to that directory in the command prompt.
- Run setup.exe /? for the help commands, take a look at setup.exe /help:upgrade and setup.exe /help:preparetopology
- First prepare the AD Schema. Run the command setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
- Next prepare Active Directory. Run the command setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
- Then prepare the domain. Run the command setup.exe /PrepareDomain /IAcceptExchangeServerLicenseTerms. In a multi-domain environment you can use /PrepareAllDomains or specify domains individually.
- Now run Setup.exe for the GUI installation, or setup /m:upgrade /IAcceptExchangeServerLicenseTerms for the unattended installation, in which case skip to step 17.
- Check online for updates, click next.
- Click next to download any available updates.
- The setup will start to copy files.
- Then it will start to initialize.
- Setup will detect this is an upgrade, click next.
- Accept the License Agreement, click Next.
- Wait for the Readiness Checks to complete and click install.
- Setup progress.
- Setup complete.
- Command line installation complete.
- Reboot your server. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00011.warc.gz | CC-MAIN-2022-27 | 4,947 | 46 |
https://www.myonlinetraininghub.com/excel-forum/power-pivot/how-to-create-a-pivot-table-from-multiple-pivot-tables | code | June 19, 2018
I want to merge different pivot tables into one pivot table to draw a YoY comparison across actual, forecasted, and last year. How can I do it?
These pivot tables are from different sources and I want to merge them to use with one slicer, and to calculate YoY and WoW; the format is below. Please help me with how I can do it. A file is attached; if you can solve it in that file it will be a massive help.
December 7, 2016
No file is attached. Anyway, my approach would be to get access to all the source data and, depending on the quality of the source data, make any needed modifications using Power Query and from there build the master Pivot Table.
If it is not possible to get hold of the source data, then you should unpivot the data using Power Query, make the needed modifications, and then build the master Pivot Table.
How to unpivot is described in this blog post.
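For reference, the unpivot step itself is small in Power Query's M; a sketch with placeholder table and column names:

    let
        Source = Excel.CurrentWorkbook(){[Name = "ActualsTable"]}[Content],
        Unpivoted = Table.UnpivotOtherColumns(Source, {"Product"}, "Period", "Value")
    in
        Unpivoted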
July 16, 2010
Try this tutorial: https://www.myonlinetraininghu.....ower-query
The Power Pivot solution requires much more explanation and with no sample file we don't really know what you're working with. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474595.59/warc/CC-MAIN-20240225103506-20240225133506-00112.warc.gz | CC-MAIN-2024-10 | 1,077 | 10 |
https://web.ma.utexas.edu/mp_arc-bin/mpa?yn=01-324 | code | - 01-324 A. Bouzouina
- Stability of the 2D Brown-Ravenhall operator
Sep 10, 01
Abstract. We prove that the two-dimensional Brown-Ravenhall operator is bounded
from below when the coupling constant is below a specified critical value --- property also referred to as stability.
As a consequence, the operator is then self-adjoint. The proof is based on the strategy followed by Lieb and Yau [LY] and by Evans, Perry and Siedentop [EPS] with some relevant changes characteristic of the dimension. Our analysis also yields a sharp Kato inequality. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00612.warc.gz | CC-MAIN-2020-40 | 585 | 8 |
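For context (my addition, quoting the standard Herbst form of the inequality rather than the paper itself): the sharp Kato inequality referred to is usually stated, in dimension d, as

    \[
      \langle \psi, |p|\,\psi \rangle \;\ge\; C_d \int_{\mathbb{R}^d} \frac{|\psi(x)|^2}{|x|}\,dx,
      \qquad
      C_d \;=\; 2\,\frac{\Gamma\!\left(\frac{d+1}{4}\right)^{2}}{\Gamma\!\left(\frac{d-1}{4}\right)^{2}},
    \]

so that C_3 = 2/pi, and in the two-dimensional case treated here C_2 = 2 Gamma(3/4)^2 / Gamma(1/4)^2, roughly 0.23.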
https://www.mail-archive.com/[email protected]/msg41610.html | code | On Sun, Jan 05, 2014 at 08:48:50PM +0100, Heiko Voigt wrote: > On Sun, Jan 05, 2014 at 08:17:00AM -0800, W. Trevor King wrote: > > It's not clear if this refers to the initial-clone update, future > > post-clone updates, or both. Ideally, the behavior should be the > > same, but in the initial-clone case we don't have an existing > > checked-out branch to work with. > > I do not think that its actually important to end up with a detached > HEAD. The documentation just states it because in most cases there > is no other option. But I do not think anything will break if a > branch points to the exact sha1 we would checkout and we checkout > the branch instead.
There's no "if the remote-tracking branch points to the exact sha1" logic in my patch. If submodule.<name>.branch is set, it *always* creates a new local branch of that name pointing to the exact sha1. If submodule.<name>.branch is not set, we still create a detached-HEAD checkout of the exact sha1. Thinking through this more, perhaps the logic should be: * If submodule.<name>.update (defaulting to checkout) is checkout, create a detached HEAD. * Otherwise, create a new branch submodule.<name>.branch (defaulting to master). The motivation is that if submodule.<name>.update is checkout, the user is unlikely to be developing locally in the submodule, as subsequent updates would clobber their local commits. Having a detached HEAD is a helpful "don't develop here" reminder ;). If submodule.<name>.update is set, the user is likely to be developing locally, and will probably want a local branch already checked out to facilitate that. > > - module_clone "$sm_path" "$name" "$url" "$reference" > > "$depth" || exit > > + module_clone "$sm_path" "$name" "$url" "$reference" > > "$depth" "$config_branch" || exit > > In the simple case (update=checkout, no branch specified) with the > new checkout branch stuff in module_clone() this code here ends up > calling checkout twice. First for master and then here later with > the sha1. This feels a little bit double. There is no guarantee that the remote master and the exact sha1 point at the same commit. Ideally we'd just clone the exact sha1 in this case. > I would prefer if we skip the checkout in module_clone() if its not > necessary. When I tried to drop the '' case here: > > @@ -306,7 +307,15 @@ module_clone() > > echo "gitdir: $rel/$a" >"$sm_path/.git" > > > > rel=$(echo $a | sed -e 's|[^/][^/]*|..|g') > > - (clear_local_git_env; cd "$sm_path" && GIT_WORK_TREE=. git config > > core.worktree "$rel/$b") > > + ( > > + clear_local_git_env > > + cd "$sm_path" && > > + GIT_WORK_TREE=. git config core.worktree "$rel/$b" && > > + case "$branch" in > > + '') git checkout -f -q ;; > > + ?*) git checkout -f -q -B "$branch" "origin/$branch" ;; > > + esac > > + ) || die "$(eval_gettext "Unable to setup cloned submodule > > '\$sm_path'")" > > } I got test-suite errors that I didn't get to the bottom of. However… > How about we move the whole "what to checkout"-decision into one place > instead of having it in update() and moving it from add() into > module_clone() ? …this sounds like a good idea to me. However, it would be a more intrusive change, and there may be conflicts with Francesco's proposed attach/detach functionality. I'll wait until we have a clearer idea of where that is headed before I attempt a more complete consolidation. > > - update_module= ;; > > + if test -n "$config_branch"; then > > + update_module="!git reset --hard -q" > > If we get here the checkout has already been done. Shouldn't this > rather specify a noop. I.E. like > > update_module="!true" We are on a local branch at this point, but not neccessarily pointing at the gitlinked sha1. The reset here ensures that the new local branch does indeed point at the gitlinked sha1. Cheers, Trevor -- This email may be signed or encrypted with GnuPG (http://www.gnupg.org). For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy
Description: OpenPGP digital signature | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888113.39/warc/CC-MAIN-20180119184632-20180119204632-00703.warc.gz | CC-MAIN-2018-05 | 4,004 | 3 |