_unix.350103
I have 150 Debian Jessie machines that open ODS files in Gnumeric when double-clicked, despite LibreOffice Calc being installed. I know it is possible to change this by right-clicking the ODS file and changing its default program from the Properties window, but getting 150 users to do this is not an option. They all use xfce4 and thunar.

I need to do this via CLI so I can do it across all workstations remotely. I have looked in /usr/share/applications and ~/.local/share/application/mimetypes.list with no luck - comparing the files before and after changing it via GUI revealed no changes here.

How can I use bash to make these workstations open ODS files with LibreOffice Calc by default?

EDIT: Unlike the answers to this question, my Jessie installs do not have ~/.config/mimeapps.list or /usr/share/applications/defaults.list
Setting default application for filetypes via CLI?
debian;libreoffice;mime types;defaults;file types
You can use mimeopen with the -d option. From man mimeopen:

DESCRIPTION
    This script tries to determine the mimetype of a file and open it with the default desktop application. If no default application is configured the user is prompted with an "open with" menu in the terminal.

-d, --ask-default
    Let the user choose a new default program for given files.

Example:

mimeopen -d file.mp4

Sample output:

Please choose a default application for files of type video/mp4
1) VLC media player (vlc)
2) Other...

Verify it:

xdg-open file.mp4
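Since mimeopen -d is interactive, it does not lend itself to unattended rollout across 150 machines. A non-interactive alternative is to write the association into each user's mimeapps.list yourself. The sketch below does that for one user; the file path and the libreoffice-calc.desktop name are assumptions, so confirm both on one machine first (e.g. by listing /usr/share/applications):

```shell
# Sketch: register LibreOffice Calc as the default handler for ODS files
# by appending the association to the user's mimeapps.list.
# Assumptions: the per-user file is ~/.config/mimeapps.list and the desktop
# file is named libreoffice-calc.desktop -- verify both on a test box.
set -eu
conf="${MIMEAPPS_FILE:-$HOME/.config/mimeapps.list}"
mkdir -p "$(dirname "$conf")"

# Create the [Default Applications] section if it is not there yet
grep -qs '^\[Default Applications\]' "$conf" || printf '[Default Applications]\n' >> "$conf"

# Map the ODS MIME type to LibreOffice Calc (skip if already present)
line='application/vnd.oasis.opendocument.spreadsheet=libreoffice-calc.desktop'
grep -qsF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"

cat "$conf"
```

Run once per user, e.g. over SSH in a loop across the workstations. The MIMEAPPS_FILE override is only there so the sketch can be tried against a scratch file before touching real configs; some setups also consult ~/.local/share/applications/mimeapps.list, so check which file your desktop actually reads.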
_codereview.51241
My current code:

val headers : Map[String, Seq[String]] = ... // NOTE: (key -> Seq()) is a valid entry
headers.toList flatMap {x => x._2 map { y => (x._1 -> y) }}

This feels clumsy... is there a better way to do this?

Goal

I have a Map[String, Seq[String]] representing all the headers in an HTTP request. That needs to be translated to a Seq[(String, String)] in order to be passed to a method that expects the headers to be stored in that format. The code is supposed to convert to this format. The pair something -> Seq(val1, val2, val3) would be translated to 3 (String, String)'s. something -> Seq() would be translated to 0.

Concern

I'm still getting the hang of some aspects of the functional approach. In a lot of cases, this one included, it feels like I'm building hard-to-read, overly complex code. I get that it's just a toList + two maps, but this seems way less clear than some of the imperative alternatives, for example:

val ret = mutable.Seq[(String, String)]()
for (kv <- headers; value <- kv._2) ret :+ (kv._1, value)

or even:

Set[Tuple2[String,String]] ret = new HashSet[Tuple2[String,String]]()
for (key : headers.keySet) {
    for (value : headers.get(key)) {
        ret.add(new Tuple2(key, value))
    }
}
Converting A Map[String, Seq[String]] To A Seq[(String, String)] in Scala
functional programming;scala
I don't see how the functional code looks clumsy or difficult to read. If you have problems reading functional code, then don't write it - the imperative code is entirely fine, as long as the side effects remain local to a single function.

A lot of readability can be gained by giving your intermediate values useful names and by applying a different formatting:

headers.toSeq flatMap { case (parameter, values) =>
  values map (parameter -> _)
}

No need to squash everything into a single line or to use tuples' ugly _X accessors.

Another way to write your code is:

for ((parameter, values) <- headers.toSeq; v <- values) yield parameter -> v

which looks very similar to the imperative solution but is exactly the same as the code before.

Write code that matches your skill level. If you don't feel comfortable with shorter and more functional code, then leave it longer and more imperative. Improve your skills during the next months and refactor the code once you come back and see the need for it.
_codereview.110257
I am currently building a small application that allows users to create a slideshow, each slideshow consisting of slides that are either videos or images. Although this will be relatively small at this point, I want to build it using best practices, keeping in mind that this project will no doubt grow. Before I get too far setting up my classes, I want to make sure my architecture is sound, as I am relatively inexperienced with object-oriented PHP but understand the importance of setting things up right in the first place.

Firstly - my PDO connection and class autoloading take place in my config file, which also defines a few constants: one for development mode, one for ensuring direct access to included files is prevented. So, config.php:

<?php
define('DEVELOPMENT_MODE', true);
ini_set('error_reporting', E_ALL);
ini_set('display_errors', 1);

/** Used to prevent direct access to included files */
define('DIRECT_ACCESS', true);

/** Define DB Constants */
define('DB_HOST', 'localhost');
define('DB_NAME', 'dbname');
define('DB_USER', 'dbuser');
define('DB_PASS', 'dbpass');

/**
 * Connects to Database
 *
 * @return object PDO Instance
 */
function dbh_connect()
{
    try {
        $dbh = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_NAME, DB_USER, DB_PASS);
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    } catch (PDOException $e) {
        if (defined('DEVELOPMENT_MODE')) {
            echo $e->getMessage();
            exit;
        } else {
            echo 'Sorry, something seems to have gone wrong, we are aware of the problem and are looking into it. Please come back later';
            mail('myemail', 'Error Connecting to dbname Database', 'Error Connecting to dbname Database');
            exit;
        }
    }
    return $dbh;
}

spl_autoload_register(function($class){
    include 'mydirectory/classes/' . strtolower($class) . '.class.php';
});

I then have my first class, the Slide class. I pass the db instance via dependency injection and set the private $dbh property to a PDO instance in my constructor. I have also created a save method. It works, so no problems there. However, I am very conscious that classes should have one responsibility, and am wondering if this method should be somewhere else, in a different class. As you can see, the method could be copied and pasted into another class and work out of the box, which is 1) good because it is portable, but 2) bad because this would be repeating myself.

<?php
class Slide
{
    private $id = null;
    private $ordernum = null;
    private $slide_type = null;
    private $location_id = null;
    private $status = null;
    private $dbh = null;

    public function __construct($dbh)
    {
        $this->dbh = $dbh;
    }

    public function __get($property)
    {
        if (property_exists($this, $property)) {
            return $this->$property;
        }
    }

    public function __set($property, $value)
    {
        if (property_exists($this, $property)) {
            $this->$property = $value;
        }
    }

    public function save()
    {
        try {
            $this->dbh->beginTransaction();
            $properties = array_filter(get_object_vars($this));
            unset($properties['dbh']);
            unset($properties['id']);
            $querystring = '';
            foreach ($properties as $prop => $value) {
                $querystring .= "{$prop} = :{$prop},";
            }
            $querystring = substr($querystring, 0, -1);
            $stmt = $this->dbh->prepare("UPDATE slides SET " . $querystring . " WHERE id = :id");
            foreach ($properties as $prop => $value) {
                $stmt->bindValue(":{$prop}", $this->$prop);
            }
            $stmt->bindValue(':id', $this->id);
            $stmt->execute();
            $this->dbh->commit();
            return $stmt->rowCount(); // if row count is 0 need to return error, trigger error, exception???
        } catch (PDOException $e) {
            $this->dbh->rollBack();
            if (defined('DEVELOPMENT_MODE')) {
                echo $e->getMessage();
                exit;
            } else {
                echo 'Sorry, something seems to have gone wrong, we are aware of the problem and are looking into it. Please come back later';
                mail('myemail', 'Error Connecting to dbname Database', 'Error Connecting to dbname Database');
                exit;
            }
        }
    }
}

Is a save method like this acceptable to have in a specific class like the Slide class? If not, and it should be in some sort of DAO, how would that work exactly? A method that looks up a slide in the database and then maps the columns to object properties via PDO::FETCH_CLASS - would this also reside in this Slide class?

Note: I understand that the __get and __set magic methods are slower than custom getters and setters, but I do like how compact the code is. Is there a risk when running PDO::FETCH_CLASS that these methods would cause problems if my column names do not match the properties of the class?
Classes, one responsibility principle and magic methods
php;object oriented;mysql;php5;pdo
The Slide class is mixing data access code and your Domain Model. These should be separated somehow, either by creating a parent class to encapsulate the data access or by utilizing the Repository Pattern to separate them.

Configuration is always messy. This is the one area where a static class with static methods makes sense. I would rewrite config.php into its own class, and remove any reference to PDO:

ini_set('error_reporting', E_ALL);
ini_set('display_errors', 1);

// Used to prevent direct access to included files
define('DIRECT_ACCESS', true);

spl_autoload_register(function($class){
    include 'mydirectory/classes/' . strtolower($class) . '.class.php';
});

class Config
{
    private static $data;

    private static function getData()
    {
        if (!isset(self::$data)) {
            self::$data = parse_ini_file(__DIR__ . '/../config.ini', true);
        }
        return self::$data;
    }

    public static function getDbUsername()
    {
        return self::getData()['database']['username'];
    }

    public static function getDbPassword()
    {
        return self::getData()['database']['password'];
    }

    public static function getDataSourceName()
    {
        return self::getData()['database']['dsn'];
    }
}

Create an ini file to encapsulate your application config:

environment = development

[database]
dsn = 'mysql:host=localhost;dbname=mydb'
username = ...
password = ...

This makes it easy to have multiple application environments for the same code base.

The Slide constructor takes an argument called $dbh. You can utilize PHP type declarations to communicate that an instance of PDO is required:

class Slide
{
    // ...

    public function __construct(PDO $dbh)
    {
        $this->dbh = $dbh;
    }
}

This helps reduce programming errors by communicating exactly what this class requires.

Requiring a database connection object in a Domain Model is not correct. Either refactor this into a parent class or create another class responsible for the database CRUD operations on Slide objects. For instance, you could create a parent class for all your models:

class Model
{
    private static $dbh;

    protected function beginTransaction()
    {
        $this->getConnection()->beginTransaction();
    }

    protected function commit()
    {
        try {
            $this->getConnection()->commit();
        } catch (PDOException $e) {
            $this->rollBack();
            throw $e;
        }
    }

    protected function executeNonQuery(PDOStatement $statement)
    {
        $this->beginTransaction();
        $statement->execute();
    }

    private function getConnection()
    {
        if (!isset(Model::$dbh)) {
            Model::$dbh = new PDO(Config::getDataSourceName(), Config::getDbUsername(), Config::getDbPassword());
            Model::$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return Model::$dbh;
    }

    protected function prepareStatement($sql)
    {
        return $this->getConnection()->prepare($sql);
    }

    protected function rollBack()
    {
        $this->getConnection()->rollBack();
    }
}

The PDO connection object should be encapsulated in the parent class so subclasses must use protected methods on the parent class to run SQL against the database. This will reduce the chances of introducing bugs and increase the maintainability of your application:

- The database connection object can be a static field on the class to promote sharing this object between model classes.
- All database transaction functionality should be in protected methods.
- The execution of PDOStatements should be in a protected method.
- No exceptions are swallowed! Please, please, please never ever catch exceptions just to echo them or swallow them. If a database operation fails, let the exception blow things up sky-high. A higher layer of your application should handle the exception, log it, and redirect the user to some generic Server Error page.

Finally, your Slide class can extend from Model:

class Slide extends Model
{
    public function save()
    {
        $properties = array_filter(get_object_vars($this));
        unset($properties['dbh']);
        unset($properties['id']);
        $querystring = '';
        foreach ($properties as $prop => $value) {
            $querystring .= "{$prop} = :{$prop},";
        }
        $querystring = substr($querystring, 0, -1);
        $statement = $this->prepareStatement("UPDATE slides SET " . $querystring . " WHERE id = :id");
        foreach ($properties as $prop => $value) {
            $statement->bindValue(':' . $prop, $this->$prop);
        }
        $statement->bindValue(':id', $this->id);
        $this->executeNonQuery($statement);
        $this->commit();
    }
}

If mixing your Domain Model with Data Access is not to your liking, then the Repository Pattern can help.

Decoupling Data Access From the Domain Model

The Repository Pattern allows you to completely decouple your Domain Model from the Data Access code. Instead of a class called Model we can refactor this into a common parent class for all your repository classes:

class BaseRepository
{
    private static $dbh;

    protected function beginTransaction()
    {
        $this->getConnection()->beginTransaction();
    }

    public function commit()
    {
        try {
            $this->getConnection()->commit();
        } catch (PDOException $e) {
            $this->rollBack();
            throw $e;
        }
    }

    protected function executeNonQuery(PDOStatement $statement)
    {
        $this->beginTransaction();
        $statement->execute();
    }

    private function getConnection()
    {
        if (!isset(BaseRepository::$dbh)) {
            BaseRepository::$dbh = new PDO(Config::getDataSourceName(), Config::getDbUsername(), Config::getDbPassword());
            BaseRepository::$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
        return BaseRepository::$dbh;
    }

    protected function prepareStatement($sql)
    {
        return $this->getConnection()->prepare($sql);
    }

    public function rollBack()
    {
        $this->getConnection()->rollBack();
    }
}

Next, you'll want to create a SlideRepository class:

class SlideRepository extends BaseRepository
{
    const TABLE_NAME = 'slides';

    public function find($id)
    {
        $statement = $this->prepareStatement('SELECT * FROM ' . SlideRepository::TABLE_NAME . ' WHERE id = :id');
        $statement->bindValue(':id', $id);
        $statement->execute();
        $data = $statement->fetchAll();
        if (empty($data)) {
            return null;
        }
        return $this->map($data[0]);
    }

    public function save(Slide $slide)
    {
        $properties = array_filter(get_object_vars($slide));
        unset($properties['id']);
        $querystring = '';
        foreach ($properties as $prop => $value) {
            $querystring .= "{$prop} = :{$prop},";
        }
        $querystring = substr($querystring, 0, -1);
        $statement = $this->prepareStatement('UPDATE ' . SlideRepository::TABLE_NAME . ' SET ' . $querystring . ' WHERE id = :id');
        foreach ($properties as $prop => $value) {
            $statement->bindValue(':' . $prop, $slide->$prop);
        }
        $statement->bindValue(':id', $slide->id);
        $this->executeNonQuery($statement);
    }

    private function map($row)
    {
        $slide = new Slide($row['id']);
        $slide->ordernum = $row['ordernum'];
        // Continue mapping columns to properties
        return $slide;
    }
}

This will remove all data access code from your Slide class, leaving it as a pure Domain Model that bundles data (in private fields) with behavior (in public methods).

Sample usage:

$repository = new SlideRepository();
$slide = $repository->find(23);
$slide->ordernum = 1;
$repository->save($slide);
$repository->commit();

A side effect of the Repository Pattern and exposing the commit and rollBack methods publicly is that you also implement the Unit of Work Pattern.
_unix.329913
I want to use socat for directing serial commands over Ethernet to an Ethernet-serial converter (static IP address). I was wondering what would be a good way of starting socat. If I understand everything correctly, systemd would allow me to make sure socat is always running or, in case of failure, tries to restart. The .service file would look like:

[Service]
Type=simple
Restart=always
RestartSec=5

[Unit]
Description=my socat test
User=me
Group=me
ExecStart=/bin/bash -c '~/my_socat.sh'

[Install]
WantedBy=multi-user.target

The script would look like:

#!/bin/bash
socat PTY,link=/home/me/dev/valve1 TCP:192.168.11.101:5001 &
socat PTY,link=/home/me/dev/valve2 TCP:192.168.11.101:5002

Would this approach do what I want? Would socat be restarted if it dies for some reason? And what would happen if the Ethernet connection is not available when socat is started? Running it from the shell without a network connection does not work and the command fails with the error message "network is unreachable".

How would you make sure socat is running before my (python) script is executed? Would you start socat from within python?
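For reference, directives like User=, Group=, and ExecStart= belong in the [Service] section, not [Unit], and ordering a unit after network-online.target is the conventional way to avoid the "network is unreachable" failure at boot. A sketch of how such a unit is commonly structured (the service name, user, and script path are assumptions, and this is untested against the hardware in question):

```ini
[Unit]
Description=my socat test
# Wait until the network is actually up before socat tries its TCP connect
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=me
Group=me
# systemd does not expand ~; use an absolute path (script must be executable)
ExecStart=/home/me/my_socat.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Note that with one script backgrounding the first socat, Restart= only tracks the script's main process; running one socat per unit (for instance via a template unit such as socat@.service) is often recommended so each connection gets its own restart logic.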
socat: calling from script, bashrc or systemd?
shell script;systemd;serial port;socat
null
_unix.98639
I am afraid that my GRUB is corrupted (I do a lot of experiments on my PC) and I know that I will not be able to reinstall Debian (downloading takes up a lot of time). Also, I am just 13 so I don't know much about it. I want to make a GRUB rescue USB (CD drive doesn't work). Could anyone tell me how to do it?
How to make a GRUB Rescue Disc?
debian;boot;grub2;rescue
I am not sure I understand what you wish, so I will answer different, separate questions.

Are you asking how to repair your system, should your boot sector get corrupted? If so, the ideal utility for you is Boot-Repair, a convenient Ubuntu-based utility. All you need to do is put on a USB stick an Ubuntu image (the one that you download in order to install Ubuntu), boot from the stick, choose at the appropriate screen "Try Ubuntu without installing it", then follow the instructions on the Web page I referenced above to download Boot-Repair, then run Boot-Repair using the standard instructions which, in my experience, are normally sufficient to solve most common problems. Please notice that the Ubuntu stick does not keep the downloaded Boot-Repair package (nor any other package, for this matter), so, if you run into the same problem again, you will have to retrace the same steps.

Should this prove insufficient, you may go to this Web page of www.distrowatch.com which lists all distros useful for a system rescue. There are ten to choose from; most people find SystemRescueCd especially helpful, but there are other options.

Lastly, there is Remastersys, which is a fantastic utility that can, according to Wikipedia:

Remastersys is a free and open source program for Debian, Ubuntu-based, or derivative software systems that can: Create a customized Live CD/DVD (a remaster) of Debian and its derivatives. Back up an entire system, including user data, to an installable Live CD/DVD.

You should not worry about the Live CD/DVD bit, because it is possible to transform a Live CD/DVD image into a bootable stick easily, see here.

Now a word about Remastersys. It is in a transition phase because its creator, Fragadelic, has abandoned its development, and has picked it up again inside a different project. However, for the time being, you can still find the executables for Debian here, while details about the state of advancement of the new project can be found here.

Remember that, when you decide which of these solutions is right for you, you can post in this same forum (but as a different question) all requests for further clarification you may have.
_webmaster.15324
I just created a simple webpage where people can browse funny photos and share them with their friends. But I'm having problems with the Facebook Like button. I have used the Facebook Like button before on a static URL with an iframe, and that seemed to work fine. I'm using the XFBML version and not the iframe version because the iframe version affects the page's layout.

This is the code I got from Facebook:

<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<fb:like href="" send="false" layout="button_count" width="450" show_faces="false" font="tahoma"></fb:like>

Facebook says that href is "the URL to like. The XFBML version defaults to the current page." So I left that one open.

Problem: When users click Like, the counter won't update. It still shows nothing, as if no one ever clicked the button.
facebook like button count
facebook
null
_unix.154168
I get an error while installing python3 on Debian 7 (I have all updates installed):

root@nuclight:~# aptitude install python3
The following NEW packages will be installed:
  python3 python3.2{a}
0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/2,621 kB of archives. After unpacking 8,991 kB will be used.
Do you want to continue? [Y/n/?]
Selecting previously unselected package python3.2.
dpkg: warning: files list file for package 'python3.2-minimal' missing; assuming package has no files currently installed
(Reading database ... 101541 files and directories currently installed.)
Unpacking python3.2 (from .../python3.2_3.2.3-7_amd64.deb) ...
Selecting previously unselected package python3.
Unpacking python3 (from .../python3_3.2.3-6_all.deb) ...
Processing triggers for desktop-file-utils ...
Processing triggers for man-db ...
Setting up python3.2 (3.2.3-7) ...
/var/lib/dpkg/info/python3.2.postinst: 6: /var/lib/dpkg/info/python3.2.postinst: python3.2: not found
dpkg: error processing python3.2 (--configure):
 subprocess installed post-installation script returned error exit status 127
dpkg: dependency problems prevent configuration of python3:
 python3 depends on python3.2 (>= 3.2.3); however:
  Package python3.2 is not configured yet.
dpkg: error processing python3 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 python3.2
 python3
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up python3.2 (3.2.3-7) ...
/var/lib/dpkg/info/python3.2.postinst: 6: /var/lib/dpkg/info/python3.2.postinst: python3.2: not found
dpkg: error processing python3.2 (--configure):
 subprocess installed post-installation script returned error exit status 127
dpkg: dependency problems prevent configuration of python3:
 python3 depends on python3.2 (>= 3.2.3); however:
  Package python3.2 is not configured yet.
dpkg: error processing python3 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 python3.2
 python3

Can you help me?
Can't install Python3 package in Debian 7
debian;package management;python;aptitude
The problem seems to be the following line:

dpkg: warning: files list file for package 'python3.2-minimal' missing; assuming package has no files currently installed

Can you please run the following:

apt-get update
apt-get remove python3.*
apt-get install python3

This will remove all Python 3 packages and then install Python 3 again. It seems your current install is slightly broken, as the python3.2 binary, which should be in the (already installed) python3.2-minimal package, cannot be found.
_webmaster.19079
When I was signing up for shared web hosting for my web site, the company, BlueHost, provided a domain name with it. Now I want to switch to another web hosting company because of the excessive throttling BlueHost does on my site. (Even though they advertise it as unlimited, what they don't tell you is that they impose a draconian CPU quota and a quota on the number of simultaneous database/MySQL connections that makes my site issue errors at the most critical moments.)

My question is: since I got the domain name for my site through them, how can I reclaim it when I switch to a new hosting company?
How to reclaim ownership of a domain name from a shared web host?
web hosting;transfer
null
_cstheory.38647
Consider a full-rank lattice in $\mathbb{R}^n$. Let $\lambda_1$ be the length of the shortest nonzero vector. Given a vector in $\mathbb{R}^n$ we wish to find the nearest lattice vector, as measured by Euclidean distance. In general this is NP-hard. However, we can add the promise that distance to the nearest lattice vector is at most $d$. I'll refer to this as the bounded distance decoding problem. My understanding is that Babai's nearest plane algorithm provably solves this for $d \leq 2^{-c n} \lambda_1$ for some specific value of $c$ (which I don't know but which I believe to be known, and which I think maybe originates from an LLL preprocessing step). My question is whether there is a known way to efficiently go beyond this by a polynomial factor. For example, is there a known polynomial-time algorithm to solve bounded distance decoding at $d = n 2^{-cn} \lambda_1$? The closest to this that I have managed to find in the literature is an algorithm by Klein and subsequent extension by Liu, Ling, and Stehle, which appears to extend beyond the radius of Babai's algorithm by a factor of $k$ at a complexity cost of $n^{k^2}$.
Bounded distance decoding beyond Babai
polynomial time;lattice
null
_webmaster.39859
I would like to know what the delay is between page indexing and ranking for new sites. Assuming pages are visible with the site: command, how much time before one sees them in search results? I have read there can be a significant delay for new websites. Did you observe this on your sites? I would like to know when I should start wondering whether there is an issue with one of my sites. Thanks!
Delay between indexing and ranking for new sites?
google index;google ranking
If your site is visible with the site: command then your site is already in the search results. If your site is indexed then it's available in the search results; there is no delay. However, there is a delay between crawling and indexing. And the fact that a page is crawled doesn't necessarily mean that it will be indexed.

If your site is indexed and visible when you do a site: search but not for some arbitrary keyword search, then this just means that your site is not yet ranking well for that particular search term. Other sites are ranking better. In order to improve your ranking you need to do all the usual SEO/SEM stuff... and get people to link to you - this is the time-consuming part. If it's a new site then you're obviously not going to have (m)any backlinks yet.
_webapps.30990
I would like to colour the background of a row in Google Spreadsheets. For example:

If cell Dx = 'To do' or 'Done', then cells Fx to Nx are filled black.
If cell Dx = 'closed', the font colour of cells Ax to Nx turns green.

I can do this in Excel quite easily...
How can I set conditional formatting on particular cells that depend on another cell's value in Google Spreadsheets?
google spreadsheets;conditional formatting
null
_webapps.60349
Is it possible to recover the history of the chat window available during Google Hangouts video call? This is separate from the history of the chat client and is not accessible with the same method I presume.
Google+ Hangout Video Call Chat History
google hangouts
null
_unix.140370
I'm trying to install the driver of the probe IPEH-00202242179. I downloaded an installation file and ran the following commands:

$ cd peak-linux-driver-7.9
$ make clean
$ make
$ su -c "make install"

But the problem is when I do make I get the following error message:

make[1]: *** [PCAN-settings] Error 1

What should I do to solve this problem?
Compilation error on Ubuntu
software installation;compiling
null
_codereview.109360
The monkey-banana problem: there are many variations to this problem, but the basic premise is that a monkey is in a room with a banana and a chair, and the monkey cannot reach the banana until he moves the chair close to the banana (so as to reach it), then climbs the chair and gets the banana. (Note: often the chair is replaced with a box or a similar object that the monkey can climb on. Also, a stick may be added to the mix, which the monkey needs to swing at the banana in order to make it drop to the floor.)

I came up with this program to solve a simple monkey-banana problem in AI (no stick). I think it's overly simple, but I would like to know what you think.

on(floor,monkey).
on(floor,chair).
in(room,monkey).
in(room,chair).
in(room,banana).
at(ceiling,banana).
strong(monkey).
grasp(monkey).
climb(monkey,chair).

push(monkey,chair):-
    strong(monkey).

under(banana,chair):-
    push(monkey,chair).

canreach(banana,monkey):-
    at(floor,banana);
    at(ceiling,banana),
    under(banana,chair),
    climb(monkey,chair).

canget(banana,monkey):-
    canreach(banana,monkey),
    grasp(monkey).

Should I add location attributes as well? I think that I have hard-coded a lot of things, or maybe having the stick included in the problem would make a better search tree?
Monkey-banana problem in Prolog
ai;prolog
null
_unix.182035
This is my situation.

On both nodes, cat /etc/drbd.d/*:

resource clustervol {
    device /dev/drbd1;
    disk /dev/sdb1;
    meta-disk internal;
    on iscsi1 {
        address 192.168.0.30:7790;
    }
    on iscsi2 {
        address 192.168.0.41:7790;
    }
}

global {
    usage-count yes;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        # The following 3 handlers were disabled due to #576511.
        # Please check the DRBD manual and enable them, if they make sense in your setup.
        # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }

    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
    }

    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
    }

    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
    }
}

On node1 everything works:

version: 8.3.11 (api:88/proto:86-96)
srcversion: F937DCB2E5D83C6CCE4A6C9
 1: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r----s
    ns:0 nr:0 dw:0 dr:996 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:29359164

On node2 it does not:

cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: F937DCB2E5D83C6CCE4A6C9
 1: cs:WFConnection ro:Secondary/Unknown ds:Diskless/DUnknown C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

I have tried a lot of commands (disconnect, invalidate, discard my data) on node2, but the situation doesn't change. It is a virgin DRBD configuration; I have no data on disk. How do I force a resync? Thanks
drbd can't start resync
drbd
Dirty and quick solution: I removed and recreated the virtual disk (I'm on a VM for testing), then:

On the primary:

drbdadm create-md clustervol
drbdadm primary --force clustervol
drbdadm -- --overwrite-data-of-peer primary clustervol
service drbd restart

On the secondary:

drbdadm secondary all
service drbd restart
_unix.310684
I'm trying to connect to myself through SSH (for testing purposes). I'm on an Ubuntu laptop. I successfully launched an SSH server, and ssh localhost works fine. If I try ssh <my local IP>, it works too. (I got it from ifconfig.)

But if I try with my public IP address (I got it on whatismyipadress.com), it doesn't work; I get a timeout after 2 minutes. (Note that ping works fine.)

Any idea why I get this? I wouldn't be surprised if the problem came from my router (or is it a modem, or something like that? I don't know), which would be working fine but have no idea where to redirect the SSH request (maybe to a computer at home which doesn't run an SSH server?)
Can't ssh to myself through internet?
ssh;openssh;timeout
As @Findus23 says, you have to forward a port (different from 22, which is perhaps used as the default port to connect to SSH on the router itself) from the router to your PC. An example:

PC [192.168.1.10] <==> [192.168.1.1] Router [1.1.1.1] <==> Internet

As you can see, connecting through SSH to the address 1.1.1.1 results in connecting to port 22 of your router on its public interface (facing the Internet). If you want to access a PC located behind the router (so, in the private network of address space 192.168.1.0/24), you have to enable Port Forwarding. This is mandatory because your router must know which host it has to contact out of the full address space of PCs connected to it. For example:

(PC1[192.168.1.2], PC2[192.168.1.3]) <==> Router

The router cannot know whether it has to route to PC1 or PC2 when contacted on a specified port. Enabling Port Forwarding you can say: "All connections from the Internet to port X of the router must be routed to port Y of PC Z."

So, in your case:

Router:4000 => PC1:22   (route connections from router port 4000 to port 22 of PC1)
ssh -p 4000 user@<router-IP>

This results in connecting, from the Internet, to the specified local host.
_softwareengineering.194196
When Aiken devised the Mark I, why did he decide to separate data and instructions? It is not mentioned in Wikipedia (or in any other searches I've done) how or why Aiken separated data and instructions.
Why did Aiken decide to separate data and instructions in the Harvard Mark I?
history;computer architecture
null
_unix.61626
I'm trying to write a command to test that data is written to a file. My first approach was:

1. Start reading in the background.
2. Write some data to the file.
3. Wait for the reader to find a result.
4. Repeat indefinitely to see if there's a timing issue.

In script form:

    while true
    do
        grep -q foo <(tail -n0 -f /var/log/syslog) &
        logger foo && logger line && wait
    done

(the logger line command is to avoid "last message repeated N times" lines in the file)

This version will typically loop once or twice before getting stuck at the wait command, so it looks like tail didn't have time to start reading before logger foo had written to the file.

What is the best way to guarantee that tail is reading before continuing? These workarounds are not ideal:

1. Pause before logger (won't work in the case of slow file systems):

        sleep 1 && logger foo && logger line && wait

2. Start a second reader, and assume that the first one has started reading by the time the second has been shown to. This looped a few thousand times before getting stuck:

        grep -q foo <(tail -n0 -f /var/log/syslog) &
        grep -q bar <(tail -n0 -f /var/log/syslog) &
        while kill -0 $!
        do
            logger bar
            logger line
        done
        logger foo && logger line && wait
How to ensure a process has started reading a file before continuing?
bash;process;process management;background process
null
_codereview.68094
I need to convert some CSV data into a format recognizable by some old script my work uses. Changing the format is pretty much out of the question. So I've written this:

    $rate_plan_terms = ["pp", "01", "12", "24", "36"]

    def generate(fname)
      puts "generating doc for #{fname}"
      file = File.open(fname)
      standard_offer_hash = {}
      features_hash = {}
      file.readlines[1..-1].each do |row|
        # standard_offers
        _region, offer_cid, _offer_code, offer_name, product_cid, product_name, component_name, component_cid = *row.split(",")
        # used?   no        yes          no          yes         kinda        yes             yes             yes
        standard_offer_hash[product_cid] ||= {}
        standard_offer_hash[product_cid][:offerType] = product_name # just keep it as product_cid?
        standard_offer_hash[product_cid][:ratePlan] ||= []
        $rate_plan_terms.each do |term|
          rate_plan = {
            :ratePlanName => offer_name,
            :ratePlanCode => offer_cid,
            :ratePlanTerm => term,
            :offerEffectiveDate => "",
            :ratePlanMRC => ""
          }
          # they will be repeated because of features
          standard_offer_hash[product_cid][:ratePlan] << rate_plan unless standard_offer_hash[product_cid][:ratePlan].include? rate_plan
        end
        # features
        features_hash[component_cid] ||= {
          :featureName => component_name,
          :featureCode => component_cid,
          :featureType => "FEATURE",
          :featureTerm => "",
          :featureEffectiveDate => "",
          :featureMRC => ""
        }
      end
      feature_arr = features_hash.map { |k, f| f }
      standard_offer_arr = standard_offer_hash.map { |k, o| o }
      product_arr = []
      pricelist = {
        :features => { :feature => feature_arr },
        :ChannelOrgType => "OCM",
        :Region => "ALL",
        :product => product_arr,
        :businessRelationship => "",
        :standardOffer => standard_offer_arr,
      }
      { :PriceList => pricelist }
    end

Yeah, complicated. This is running on up to a million lines per file, usually two files, every day. So it needs to be faster than it currently is.

FYI, the CSV is loosely separated into many products belonging to many offers, and products have many features, but two different products can have the same features. So there is a lot of repeating.

There seem to be two time sinks here. One is the Array#include? I'm running over and over again. I wanted to transform that into a "build hash -> convert to array later" approach like the one I took with feature_arr and standard_offer_arr, to avoid doing searches. But it depends on the product_cid, which (could) change per row.

The second is those two hashes to arrays at the end. The standard_offer_hash is going to be huge. That can't be very fast.

To be completely honest I just don't see a way to optimize this further. Any suggestions?

I can show some output for what the generated structure needs to look like, if it would help.
Generating a complicated data structure in a more efficient manner
ruby;array;hash table
- You open the file, but you don't explicitly close it. Bad karma.
- You read all lines into memory, it seems, rather than go through them one by one. Memory-wise that's pretty inefficient.
- Any reason for $rate_plan_terms to be global? That too seems like bad karma. A constant would make more sense, I think.
- Have you considered using Ruby's bundled CSV parser? It'll be more robust.
- It'd be easier to just use Hash#values rather than map { |k, v| v }

As for the use of include?, you can skip it by making :ratePlan a hash, and using a key like "#{offer_name}#{offer_cid}". Simpler to explain in code, so see below.

    require "csv"

    def generate(fname)
      puts "generating doc for #{fname}"
      terms = %w{pp 01 12 24 36}.freeze # just made this local here; a const would still be better

      offers = {}
      features = {}

      CSV.foreach(fname, headers: true, skip_blanks: true) do |row|
        # If the CSV file has a proper header line, you can also access cells by their name
        _region, offer_cid, _offer_code, offer_name, product_cid, product_name, component_name, component_cid = *row.fields

        offers[product_cid] ||= {
          offerType: product_name,
          ratePlan: {}
        }

        key = "#{offer_name}#{offer_cid}"
        offers[product_cid][:ratePlan][key] ||= terms.map do |term|
          {
            ratePlanName: offer_name,
            ratePlanCode: offer_cid,
            ratePlanTerm: term,
            offerEffectiveDate: "",
            ratePlanMRC: ""
          }
        end

        features[component_cid] ||= {
          featureName: component_name,
          featureCode: component_cid,
          featureType: "FEATURE",
          featureTerm: "",
          featureEffectiveDate: "",
          featureMRC: ""
        }
      end

      offers.each do |key, offer|
        offer[:ratePlan] = offer[:ratePlan].values
      end

      {
        PriceList: {
          features: { feature: features.values },
          ChannelOrgType: "OCM",
          Region: "ALL",
          product: [],
          businessRelationship: "",
          standardOffer: offers.values
        }
      }
    end

Still not pretty (it's a tad over the 5-10 line limit you might strive for for methods in Ruby). It might be nice to break things into methods like feature_hash(name, cid), which would generate the separate hashes.

    require "csv"

    TERMS = %w{pp 01 12 24 36}.freeze

    def generate(fname)
      puts "generating doc for #{fname}"

      offers = {}
      features = {}

      CSV.foreach(fname, headers: true, skip_blanks: true) do |row|
        # If the CSV file has a proper header line, you can also access cells by their name
        _region, offer_cid, _offer_code, offer_name, product_cid, product_name, component_name, component_cid = *row.fields

        offers[product_cid] ||= {
          offerType: product_name,
          ratePlan: {}
        }

        key = "#{offer_name}#{offer_cid}"
        offers[product_cid][:ratePlan][key] ||= rate_plan_array(offer_name, offer_cid)

        features[component_cid] ||= feature_hash(component_name, component_cid)
      end

      offers.each do |key, offer|
        offer[:ratePlan] = offer[:ratePlan].values
      end

      {
        PriceList: {
          features: { feature: features.values },
          ChannelOrgType: "OCM",
          Region: "ALL",
          product: [],
          businessRelationship: "",
          standardOffer: offers.values
        }
      }
    end

    def rate_plan_array(name, cid)
      TERMS.map do |term|
        {
          ratePlanName: name,
          ratePlanCode: cid,
          ratePlanTerm: term,
          offerEffectiveDate: "",
          ratePlanMRC: ""
        }
      end
    end

    def feature_hash(name, cid)
      {
        featureName: name,
        featureCode: cid,
        featureType: "FEATURE",
        featureTerm: "",
        featureEffectiveDate: "",
        featureMRC: ""
      }
    end

Either way you cut it, though, it probably won't win any beauty contests.
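To make the include?-versus-hash point concrete, here is a tiny standalone sketch (with made-up rows, not the real CSV columns) of the ||= de-duplication idiom the rewrite relies on:

```ruby
# ||= only assigns when the key is missing (or nil/false), so duplicate
# rows are skipped by an O(1) hash lookup instead of an Array#include? scan.
features = {}
rows = [["f1", "Caller ID"], ["f2", "Voicemail"], ["f1", "Caller ID"]]
rows.each do |cid, name|
  features[cid] ||= { featureCode: cid, featureName: name }
end
puts features.size            # prints 2: the duplicate "f1" row was ignored
puts features.keys.join(",")  # prints f1,f2
```

Ruby hashes also preserve insertion order, which is why taking .values at the end reproduces the order rows were first seen.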
_webmaster.7697
Is any one aware of any data or studies from an impartial source that show the impact of EV SSL certificates on customer behavior? I've been unable to find any such studies. If an EV SSL certificate increases sales on a web store front by even a few points, I can see the value. Aside from data targeted at EV SSL it may be possible to guess at customer behavior based on user interaction with regular SSL certificates. Are users even aware of SSL security? Does regular SSL have any proven effect on web store front sales?Note, that I'm not asking about the necessity of good encryption - I'm asking about a potential customer's perception of security & trust.
EV SSL Certificates - does anyone care?
https;security certificate
null
_unix.36693
I recently downloaded something on my Red Hat Linux computer, and it told me I was out of space. I checked my disk usage, and it says I've used 100% of my folder, but as you can see I still have almost 900 GB available. How can I reallocate some of that space to my user?

Here is a screenshot of my disk usage: http://i.imgur.com/o2CzK.png

I know this may be a basic question, but I can't find a way to give myself more space. Also, I have root access.

Please let me know if anything else is needed.

EDIT: output of df -h:

    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00  901G  3.5G  852G   1% /
    tmpfs                            1.9G  1.2M  1.9G   1% /dev/shm
    /dev/sda3                        194M   31M  154M  17% /boot
    /dev/sda5                        4.0G  3.7G  105M  98% /home
RedHat Linux Space Usage Problem
rhel;hard disk;disk usage
You have a tiny /home partition (where your home directory is) and a huge system (/, root) partition.

You can create a directory for yourself outside /home, for example /LARGE/sortinghat, and create a symbolic link in your home directory:

    ln -s /LARGE/sortinghat ~/LARGE

Put all your large files under ~/LARGE, and keep your most critical files in /home. This kind of setup is cumbersome, but can be useful if you have different backup policies for /home (small, critical data, often backed up) and /LARGE (huge, less critical data, backed up occasionally).

Alternatively, move /home to the root partition. This is best done from a live CD/USB, as moving your home directory while you're logged in is delicate at best. You might take this opportunity to remove the /home partition and enlarge the root partition by that much:

1. Make /dev/sda5 an LVM physical volume (pvcreate).
2. Add it to the VolGroup00 volume group (vgextend).
3. Enlarge the LogVol00 logical volume (lvextend).
4. Enlarge the filesystem on it (resize2fs).

Alternatively, shrink / and enlarge /home. You can't shrink a mounted filesystem, so boot from a live CD/USB.

1. Shrink the filesystem (resize2fs).
2. Shrink the logical volume (lvreduce).
3. Create a new logical volume in the free space (lvcreate).
4. Create a filesystem on the new logical volume (mke2fs).
5. Move your data from the existing home partition to the new filesystem.
6. Turn /dev/sda5 into an LVM physical volume and add it to the VolGroup00 volume group (pvcreate, vgextend).
7. Add the new space to one of the existing logical volumes (lvextend) and extend the filesystem on it (resize2fs).
_softwareengineering.118758
I'm a proponent of properly documented code, and I'm well aware of the possible downsides of it. That is outside of the scope of this question.I like to follow the rule of adding XML comments for every public member, considering how much I like IntelliSense in Visual Studio.There is one form of redundancy however, which even an excessive commenter like me is bothered by. As an example take List.Exists()./// <summary>/// Determines whether the List<T> contains elements/// that match the conditions defined by the specified predicate./// </summary>/// <returns>/// true if the List<T> contains one or more elements that match the/// conditions defined by the specified predicate; otherwise, false./// </returns>public bool Exists( Predicate<T> match ){ ...}Summary and returns are basically saying the same thing. I often end up writing the summary more from a returns perspective, dropping the returns documentation altogether.Returns true when the List contains elements that match the conditions defined by the specified predicate, false otherwise.Additionally, the returns documentation doesn't show up in IntelliSense, so I rather write any immediately relevant information in summary.Why would you ever need to document returns separately from summary?Any information on why Microsoft adopted this standard?
Should a method comment include both a summary and return description when they're often so similar?
documentation;comments;intellisense
You can infer one from another, but those two sections remain separate, because it helps to focus on the one which interests the person when reviewing/using the code.Taking your example, I would rather write:/// <summary>/// Determines whether the List<T> contains elements that match the conditions/// defined by the specified predicate./// </summary>/// <returns>/// A value indicating whether at least one element matched the predicate./// </returns>public bool Exists(Predicate<T> match){ ...}If I'm reviewing this code and I want to know what the method does, I read the summary, and that's all what I care about.Now, let's imagine I'm using your method and the return value I receive is strange, given the input. Now, I don't really want to know what the method does, but I do want to know something more about the return value. Here, <returns/> section helps, and I don't need to read the summary.Again, in this example, you can infer the summary from <returns/>, and infer the expected return value from the summary. But taking the same argument to the extreme, there is no need to document your method at all in this case: the name of the method, put inside List<T>, with Predicate<T> match as a sole argument is quite explicit itself.Remember, the source code is written once but read plenty of times. If you can reduce excise for the further readers of your code, while spending ten seconds writing an additional sentence in the XML documentation, do it.
_unix.256655
I've damaged the modification dates of files on my drive, and I wanted to revert them using snapper snapshots, but it doesn't seem to work: snapper only reverts the content of files, not modification dates. However, in the .snapshot directory the files have the proper, old timestamps. How do I completely restore the disk state to a certain snapshot, so that all timestamps match the ones stored in the snapshot?
Snapper - how to undo changes, preserving timestamps?
filesystems;data recovery;btrfs;snapshot;file metadata
null
_cs.75760
I'm simulating a procedure that assigns tasks to servers and want to estimate the average waiting time until a task is served (finds a free server).This procedure runs periodically, thus every task that is rejected in a run can try in the next runs until it finds a free server.The inter-arrival times of tasks follow an exponential distribution.Between runs, some tasks may finish.Is there a way to estimate the average waiting time of tasks?
Estimation of average waiting time
algorithm analysis
If tasks arrive faster than they can be dealt with, the average waiting time is unbounded. You can probably adapt the Pollaczek–Khinchine formula to give an analytic answer to your question:

$$L = \rho + \frac{\rho^2 + \lambda^2 \operatorname{Var}(S)}{2(1-\rho)}$$

where

- $L$ is the mean queue length;
- $\lambda$ is the arrival rate of the Poisson process;
- $1/\mu$ is the mean of the service time distribution $S$;
- $\rho = \lambda/\mu$ is the utilization; and
- $\operatorname{Var}(S)$ is the variance of the service time distribution $S$.
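To make the formula concrete, here is a minimal numeric sketch (the arrival and service rates below are made-up illustration values, not from the question). With exponential service times, Var(S) = 1/mu^2, so the result should collapse to the classic M/M/1 value rho/(1 - rho), which gives a handy sanity check; Little's law then turns the mean queue length into a mean time in system:

```python
def pk_mean_queue_length(lam, mu, var_s):
    """Pollaczek-Khinchine mean number in an M/G/1 system (requires rho < 1)."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable queue: utilization must be below 1")
    return rho + (rho ** 2 + lam ** 2 * var_s) / (2 * (1 - rho))

lam, mu = 2.0, 4.0     # arrival rate and service rate (illustrative values)
var_exp = 1 / mu ** 2  # variance of an exponential service time
L = pk_mean_queue_length(lam, mu, var_exp)
W = L / lam            # Little's law: mean time in system
print(L, W)            # prints 1.0 0.5
```

For this M/M/1 special case, L works out to exactly rho/(1 - rho) = 1.0, so the general formula and the textbook result agree; with a lower-variance service distribution (smaller Var(S)) the same formula gives a shorter queue.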
_unix.82997
I have been wondering how all these small NAS boxes that run Linux can share over network and USB. The network part is totally under control, but I am completely at a loss on how I could hook up a computer to my server through a USB cable and get a share. Is this done with some specific hardware, or is it done through software?
Create my own disk server
linux;file sharing;disk
Most of the NAS boxes that I've encountered make use of Samba, and then share these USB-mounted disks out as Samba shares.

Rules like this can be put in your /etc/fstab file:

    $ blkid
    /dev/sda2: LABEL="OS" UUID="DAD9-00EF" TYPE="vfat"

This line can be adapted to this:

    /dev/sda2  /export/somedir  vfat  defaults  1 2

Once this USB drive is mounted at boot up, Samba can be used to share out /export/somedir.

    # /etc/samba/smb.conf
    [xbox_videos]
       comment = Videos for Xbox
       path = /export/somedir
       browseable = yes
    ;  available = yes
       guest ok = no
    ;  read only = yes
       public = yes
       inherit permissions = yes
       writeable = yes
       hosts allow = 192.168.0. 192.168.1. localhost
_unix.350794
I am failing to add comments for multi-line statements in a bash script. It seems that bash is not interpreting them the way I hoped.

Comments could be really useful here, since some statements span 4-5 lines. Can anyone advise me how to achieve this?

This is just a basic example, which is not working:

    #!/bin/bash
    iptables -A INPUT \
    # Comment for rule below
    -p tcp --dport 21 \
    # Comment for rule below, no. 2
    -s 10.0.0.1 \
    -j ACCEPT

I just give a plain example. Allowing comments can make complex examples much easier to follow (this is not a complicated example, but you get the point), like this:

    grep "some_file" \
    # awk does that...
    awk '{print $1}' \
    # sed does that...
    sed 's/match1/match2/g'

Of course there are no whitespace characters after the \.
How to add comments on multiline statements on Bash script?
bash;shell script;shell
A line broken into several lines by escaping the newline is still only one line. A comment stretches from the # to the end of the line, no matter if that line is broken into many lines or not.

What the shell parses when you write

    echo "hello" \
    # world

is

    echo "hello" # world

This is different though (and works):

    grep "hello" |
    # now sed:
    sed 1p

Since each part of the pipeline is complete on its own line, it's possible to intermingle the lines with comments, as long as the newlines are not escaped.
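To illustrate the working pattern, here is a small self-contained example (made-up data, not the question's commands) of a pipeline with a comment line between every stage:

```shell
# Each stage ends with "|", so the next line may freely be a comment or
# the next command; no backslash continuation is involved.
printf 'alpha\nbeta\n' |
# keep only the line containing "beta"
grep beta |
# rewrite "beta" as "gamma"
sed 's/beta/gamma/'
```

This prints gamma, and each comment sits right next to the stage it describes, which is what the question was after.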
_codereview.97148
I am writing a piece of code in C# that mimics the behaviour of the SQL function STR in the following case:

    declare @number numeric(19,15) = 219.37345462345234235
    SELECT str(@number,9,9) -- 219.37345

    declare @number numeric(19,15) = 19.37345462345234235
    SELECT str(@number,9,9) -- 19.373455

Following is an input/output table for this scenario:

I have implemented the following function in C#, which accepts a double variable and returns a string equal to the result returned by the SQL function STR(@number,9,9).

    public static string GetCustomStringQuantity(double quantity)
    {
        string customQuantity;
        int length = (int)Math.Log10(Math.Abs(quantity)) + 1;
        if (length > 9)
            return "*********";
        quantity = Math.Round(quantity, 8 - length, MidpointRounding.AwayFromZero);
        customQuantity = quantity.ToString();
        customQuantity = (quantity % 1 == 0) ? (customQuantity + ".").PadRight(9, '0') : customQuantity.PadRight(9, '0');
        if (customQuantity.Length > 9)
            customQuantity = customQuantity.Substring(0, 9);
        else if (customQuantity.Length == 9 && customQuantity.EndsWith("."))
            customQuantity = customQuantity.Substring(0, 8);
        return customQuantity;
    }

How can I make this function better? Or is there a better way to implement this functionality?
Implementing SQL function STR
c#;performance;strings;sql;formatting
null
_unix.297884
Is there a way to ensure less clears screen on exit? The opposite of less -X. The screen is not being cleared when I exit a man page in iTerm2, however the screen is cleared when using the default mac terminal. Does anyone have suggestions?$LESS is set to less -R
Ensure less clears screen
osx;man;less;iterm
Normally less clears the screen (which probably refers to switching back to the normal screen from the alternate screen) when the terminal description has the appropriate escape sequence in the rmcup capability.You would see a difference if you are using different values of TERM in the two programs. The infocmp program can show differences for the corresponding terminal descriptions.less also attempts to clear the remainder of the screen, but that depends upon whether anything was displayed, and if the output was a terminal (in contrast to a pipe).Aside from the terminal description, some terminal emulators make it optional whether to allow the alternate screen. You may have selected that option at some point. (I'm testing with default configuration, which works as intended).
_softwareengineering.352933
I recently got into a slight argument with the other developer on my team about the correct way to develop with SVN and Eclipse. My way of developing in multiple branches is to create a new workspace for each branch. The reason I create new workspaces for each branch is so that I can separate features/bug fixes from each other and from trunk. However, this doesn't mean that I am only supportive of people who use this methodology.

I don't care how others use SVN as long as they put proper commit messages in the logs, so that if anything happens (be it to me or anybody else) we could always roll back.

The other developer on my team has a strong belief that the way I am creating new workspaces for each branch is NOT the way SVN was intended to be used, and grilled me pretty hard on it. They told me "you're not using SVN the way it was intended". The way they use it is with a single workspace, switching the directory between branches/trunk during development.

Although I feel this can get messy and confusing quite fast, especially in our work environment where we work on multiple branches at once, it is one way people can develop.

Can someone put some clarity into the way I'm supposed to think about developing with multiple branches? Or am I correct in thinking that there is no right or wrong way to do it, and different people just prefer different methods?

Thank you
How to properly develop in different SVN branches with Eclipse IDE?
svn;branching;ide
null
_opensource.4571
I have heavily used open source software for several years, but now that I'm finally ready to dive in as a developer, I found myself not even knowing this very basic thing: what do maintainers do?As their name suggests, maintainers probably maintain software by fixing problems, bugs, etc. They are also often project or sub-project leaders, and do the official release builds, etc. However it seems some maintainers also add new features besides just maintaining old ones.What are the general things that open source software maintainers do? Is it just another word for project leaders?
What do open source software maintainers do?
project management
It may be easier to think about this in a negative sense, i.e. what are the responsibilities of someone who contributes to a project but isn't a maintainer?If I submit a patch to someone else's project and that patch gets merged in, I am an author of the project. However, I may never look at the project ever again.Maintainers, then, are the people who do anything and everything beyond that. What they specifically do depends on the project, their interests, and their schedule, but generally speaking they are responsible for anything that needs to be done (even if they don't do the actual work and just ensure that someone else does it).
_unix.86296
I have read that you should not store /tmp on a SSD, because the frequent writes will shorten the lifetime of the SSD. But what about /var/tmp?Is it reasonable for /var/tmp to be stored on a SSD? Or should we avoid having /var/tmp stored on a SSD, to avoid killing the SSD prematurely?
Is it OK to store /var/tmp on a SSD?
directory structure;ssd;tmp
While it is true that all flash-based storage devices have a limited number of writes before the transistor insulation breaks down, it's not as bad as it was when SSDs were first introduced years ago.

Basically, due to the fact that most modern SSDs employ wear leveling and are based on NAND flash, burning through a drive is not a problem like it used to be. You shouldn't need to worry about it. An SSD with constant writes will still outlast any rotary hard drive.

Resources

- http://www.storagesearch.com/ssdmyths-endurance.html
- http://maxschireson.com/2011/04/21/debunking-ssd-lifespan-and-random-write-performance-concerns/
- http://www.tomshardware.com/forum/267303-32-what-write-limit-ssds
_unix.124394
I am following this article and generated an initramfs.cpio file. The tutorial mentions that I have to put this file in the build directory:

    # 0. Copy the CPIO archive to your kernel build directory:
    cp initramfs.cpio .

But I don't see any folder called "build":

    [root@xilinx linux-xlnx]# ls
    arch     CREDITS        drivers   include  Kbuild   lib          mm              README          scripts   System.map  virt
    block    crypto         firmware  init     Kconfig  MAINTAINERS  Module.symvers  REPORTING-BUGS  security  tools       vmlinux
    COPYING  Documentation  fs        ipc      kernel   Makefile     net             samples         sound     usr         vmlinux.o

What is the correct place then? My board is a Xilinx Zynq, based on ARM.
Correct location of initramfs.cpio file when compiling kernel
kernel;initramfs
null
_unix.211442
I'm learning about iptables, firewalling, routing and so on. I'm on Linux, CentOS 7, and I've set up local port forwarding to localhost with:

    firewall-cmd --add-forward-port=port=2023:proto=tcp:toport=22

It is working as expected when trying from another machine. Locally, it is not visible. I've tried with netstat and ss, nmap, lsof and nc. Nothing: all of them see everything except port 2023, even while it is currently forwarding an SSH session.

After much reading here on Stack Exchange, I found a way to make it visible locally (from "iptables: redirect local request with NAT"), but actually that is not a solution; it just made me understand why it is not visible locally. I really would like to know if there exists a way to check it locally, or whether the remote connection is the only option.

Thank you :)

Edit: The setup of the test machine is easy: just execute the firewall-cmd line I wrote in this question. No other rules added. Then test it with ssh (or nmap) from outside: it works. Check it from localhost itself: both ssh and nmap give "connection refused".

Edit 2: Sorry, I originally wrote the firewall-cmd line incorrectly, with a :toaddr=127.0.0.1 at the end; fixed.
How check a port forwarded from localhost to localhost on.. localhost?
networking;iptables;port forwarding;nat;firewalld
null
_softwareengineering.55455
In order to view system performance, I have been asked by management to give page response times for a few key pages. I want to make sure I am giving a good picture of the overall health of the system, and not just narrowing in on a single measurement.So my question is: When developing software, what metrics would you provide to your stakeholders to indicate a system that is healthy and running well?(if it is not running well, that should also be evident! Not trying to hide/obscure any problems.)
System response times --- A good Service Level Agreement?
.net;performance;metrics;iis
null
_softwareengineering.322131
I use jQuery, and I would like to know if it is possible to put a tag in an HTML file so that the text in this tag takes its value from a file, like:

    <li id="report" role="presentation" data-toggle="tab"><a href="#"><span class="glyphicon glyphicon-stats"></span> <i18n tag="report" /></a></li>
    <li id="setup" role="presentation" data-toggle="tab"><a href="#"><span class="glyphicon glyphicon-wrench"></span> <i18n tag="setup" /></a></li>

message.properties:

    report=Report
    setup=Setup

message_fr.properties:

    report=Rapport
    setup=Configuration

Is there anything similar to that?
search library for internationalization
javascript
null
_codereview.10163
    string Format(string format_string, T1 p1, T2 p2, ..., TN pn)

The Format() function takes a copy of format_string and replaces occurrences of ___ with the remaining parameters to the function (p1, p2, ...). The first occurrence of ___ will be replaced by p1, the next by p2, and so on. If there are no occurrences of ___, the remaining parameters will be appended to the string, delimited by spaces. If the parameters are not strings, they will be converted to strings with operator<<(ostream, x). Format will return the modified string.

For example:

    Format("The ___ is ___ years", "fox", 8, "old")

evaluates to:

    The fox is 8 years old

The implementation follows:

    struct FormatEmptyStruct {};

    const string FormatPlaceholder("___");

    template<class T>
    inline FormatEmptyStruct OStreamWriteT(ostringstream& os, const std::string& sFormat, string::size_type& iCurrentPos, const T& t)
    {
        auto iNextPos = sFormat.find(FormatPlaceholder, iCurrentPos);
        if (iNextPos == std::string::npos)
        {
            os.write(sFormat.data() + iCurrentPos, sFormat.size() - iCurrentPos);
            iCurrentPos = sFormat.size();
            os << " ";
            os << t;
        }
        else
        {
            os.write(sFormat.data() + iCurrentPos, iNextPos - iCurrentPos);
            os << t;
            iCurrentPos = iNextPos + FormatPlaceholder.size();
        }
        return {};
    }

    struct EmptyStruct {};

    inline string Format() { return ""; }

    template <class... Args>
    inline string Format(const string& sFormat, const Args&... args)
    {
        ostringstream os;
        string::size_type iCurrentPos = 0;
        initializer_list<FormatEmptyStruct>{ OStreamWriteT(os, sFormat, iCurrentPos, args)... };
        if (!sFormat.empty())
            os.write(&sFormat.front() + iCurrentPos, sFormat.size() - iCurrentPos);
        return os.str();
    }
String format function
c++;strings;c++11;formatting
null
_webmaster.33395
I want to know the previous hosting IP address of an expired domain. Is there any way to find this?
Find the IP address of expired domains
web hosting;dns
null
_codereview.168778
In my initial problem posted on SO, whenever anyone accessed Mail.php on the server, it used to send an empty email to the $to address. To avoid this I came up with a solution:

    require 'PHPMailer/PHPMailerAutoload.php';

    $yourName = $_POST['yourName'];
    $sender   = $_POST['emailID'];
    $subject  = $_POST['subject'];
    $message  = $_POST['message'];
    $to       = '[email protected]';

    if (empty($yourName) || empty($sender) || empty($subject) || empty($message)) {
        echo "Fields are empty";
    } else {
        $mail = new PHPMailer;
        //$mail->SMTPDebug = 2;
        $mail->isSMTP();
        $mail->Host = 'smtp.gmail.com';
        $mail->SMTPAuth = true;
        $mail->Username = 'gmailUser';
        $mail->Password = 'gmailPassword';
        $mail->SMTPSecure = 'tls';
        $mail->Port = 587;
        $mail->setFrom($sender, $yourName);
        $mail->addAddress($to);
        $mail->addReplyTo($sender);
        $mail->isHTML(true);
        $mail->Subject = $subject;
        $mail->Body    = "<b>From: </b>" . $sender . "<br>" . "<b>Name: </b>" . $yourName . "<br>" . "<b> Message Body </b>" . $message;
        $mail->AltBody = "<b>From: </b>" . $sender . "<br>" . "<b>Name: </b>" . $yourName . "<br>" . "<b> Message Body </b>" . $message;
        if (!$mail->send()) {
            echo 'Message could not be sent.';
            echo 'Mailer Error: ' . $mail->ErrorInfo;
        } else {
            echo "Message has been sent....You're being redirected.....";
        }
    }

This fix basically lets the user interact with Mail.php without Mail.php sending a null email to $to.

Now, hours later, I find out that this is a very bad approach. I would like to know how a professional would solve this problem. Is there any good approach I could use to efficiently optimize the code?
Validation before sending mail in PHP
php;validation;email
null
_vi.6340
I'm trying to change between the active buffer and the alternate buffer with the command Ctrl + ^, as this video explains. However, nothing happens. I'm using a Spanish keyboard, and to type the ^ character I have to hold Shift, so I actually have to type Ctrl + Shift + ^. As I said, nothing happens, so is there any other way to do this, or could I just map other keys?

Edit: Here you can see the keyboard.
Changing between the active buffer and the alternate buffer
key bindings;buffers
CTRL-6 can be used to edit the alternate file; tested on OS X by changing the input source from U.S. to Spanish ISO.

If you find yourself wanting to edit the alternate file often, you could consider mapping backspace to edit the alternate file like this:

    " 'edit alternate file' convenience mapping
    nnoremap <BS> <C-^>
_scicomp.8953
How do MATLAB's optimization tools work? They just take the error function and don't need the Jacobian (first derivatives) or Hessian (second derivatives). How is this possible? If it is finite differences, how does it determine the value of dx for the different degrees of freedom?
How Matlab optimization works without Jacobian or Hessian
optimization;matlab;jacobian
The answer depends very much on which optimization function in MATLAB you're talking about. The base MATLAB system includes a function fminsearch that uses the simplex method (not the simplex method for LP, but rather the Nelder-Mead-Lagarias algorithm of the same name). This method does not make use of derivatives.

The functions of the Optimization Toolbox use a variety of different methods depending on the nature of the problem and its size. There are different options for problems ranging from small to very large, and there are different options depending on the nature of the problem (e.g. constrained vs. unconstrained). For unconstrained nonlinear optimization of problems up to medium size, the toolbox uses the BFGS quasi-Newton method and can automatically compute finite difference approximations.
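To illustrate the finite-difference part (a generic sketch of the usual heuristic, not MATLAB's actual internals): when no Jacobian is supplied, a gradient-based solver can approximate each partial derivative with a forward difference, sizing the step per coordinate from the square root of machine epsilon scaled by |x_i|:

```python
import math
import sys

def fd_gradient(f, x):
    """Forward-difference gradient of f at x (illustrative, O(h) accurate)."""
    eps = math.sqrt(sys.float_info.epsilon)  # ~1.5e-8 for double precision
    fx = f(x)
    grad = []
    for i in range(len(x)):
        h = eps * max(1.0, abs(x[i]))        # per-coordinate step "dx"
        xh = list(x)
        xh[i] += h
        grad.append((f(xh) - fx) / h)
    return grad

# Rosenbrock function; its true gradient at [1, 1] is exactly [0, 0]
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
g = fd_gradient(rosen, [1.0, 1.0])
print(g)  # small values near zero: only the O(h) truncation error remains
```

The sqrt(eps) choice balances truncation error (which shrinks with h) against floating-point cancellation in f(x+h) - f(x) (which grows as h shrinks); scaling by |x_i| keeps the step meaningful when coordinates have very different magnitudes.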
_webapps.14960
When using both Gmail and Google Apps (with my own domain) for email, how can I get the Gmail account to send outgoing mail through [email protected] and have [email protected] go through its own outbound server?

I want to have each of the accounts send and deliver from their respective accounts. Currently both are going out through the same server/domain, in this case [email protected].
Send Gmail and Google Apps outbound emails through respective account servers
gmail;google apps
null
_webmaster.60542
I have an A record in GoDaddy pointing to my web server. I want to move this now to Route 53. The last time I did this it took almost 48 hours to propagate, I think because of the order I did things in.

This is the process I think should hopefully minimize downtime:

1. Create a hosted zone for my domain to get the 4 name servers that I'll use as my delegation set.
2. Add the 4 name servers to GoDaddy. GoDaddy has 2 other "informational" name servers defined that it looks like you can't delete.
3. Now in Route 53 add a record set. Create an A record to the IP of my web server.
4. Wait to see if DNS propagation has happened. With dig?
5. Remove the A record from GoDaddy.

Will this process work with minimal or no downtime? Will dig tell me enough information to see that it is safe to remove the A record from GoDaddy?
Minimize downtime moving from godaddy to route 53
dns;godaddy;route53
The first thing you want to do is reduce the TTL (Time to Live) on your DNS records to be as small as acceptable. This will keep caching DNS servers from holding onto stale data for too long when you switch.

Next, I assume your web site is going to be moving to a new IP address. There is no reason switching the site to a new host IP address has to happen at the same time as your DNS changes. Load the site up at the new host, and in the old (current) provider's DNS, change the A record to the new IP. If you really require a hard switchover, at the old provider you can set up forwarding to the new IP.

In your new provider's DNS records (which are not yet authoritative) also set up the A record (and any others you need).

Now your site is (almost instantly) running at the new host.

However your DNS is still being handled by the old host, and as soon as you stop paying them, they might remove your entries. ;)

At this point you can submit to change the authoritative NameServers from the old host to the new host. It doesn't really matter when the change goes through (or if it's cached), because BOTH hosts' DNS servers are serving up the same IP address information anyway.

After the change is complete, remember to raise the TTL back up to a reasonable time to lessen the query load on the DNS servers.
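As an illustration of that first step, a lowered TTL might look like this in a BIND-style zone file; the domain, IP, and values here are hypothetical, not taken from the question:

```
$TTL 300                                   ; zone default, lowered ahead of the move
example.com.    300  IN  A  203.0.113.10   ; old host, 5-minute cache
www             300  IN  A  203.0.113.10
```

Once the move is done and resolvers have settled on the new name servers, the TTLs can be raised back (e.g. to 86400).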
_unix.330257
I wanted to find whether I am inside a chroot as part of a script where I'm not allowed to mount /proc if it is not (unlike the case in How do I tell I'm running in a chroot?). How to find whether I'm in a chroot even if /proc is not mounted?I do have root access. This is on Fedora. The solution should not depend on the filesystems used.
How do I tell I'm running in a chroot if /proc is not mounted?
linux;mount;chroot
null
_unix.158140
I have a Benq 5150C scanner. It didn't work out of the box. After downloading the Windows driver (from Benq Middle East), extracting the .bin file and following this post, the scanner started working, but only scanning 1/4 of the page. It seems there's a problem with the resolution. Is there a way to configure it to work properly? Using Ubuntu 14.04 64-bit.
Benq 5150C scanning 1/4 of page
drivers;scanner;sane
null
_hardwarecs.7048
It's simple enough to imagine. I have a phone dock that has no connector of its own, but a place to thread a cable into it. I have a USB C male-to-male cable that goes from the wall charger to this dock. In order to preserve the cable's interior, I want to attach to the end of it a right-angle adapter so it can be pointed up and into the phone's charging port without bending the cable itself.

However, after many hours of searching between me and two other friends, we have come up dry. Does such a thing exist yet?

It's worth noting we did find some right-angle adapters that turned sideways, but this won't work for this setup; it needs to turn up or down, not left or right.
Male USB C to Female USB C right-angle adapter
usb
null
_unix.78008
If the file structure is like this:

/a/p/c/d...
/c/a/c/g/f/...
/a/c/d/e/...

And I want to do this:

find -mindepth 3 -type d -name p -prune -or -name c -print

However, this command will not prune the 'p' directory and the first line will be included. I know it's actually not a conflict. But how do I prune 'p' with the mindepth applied?
The find command: the option '-mindepth' conflicts with the action '-prune'
find
I think I would do it this way:find -mindepth 3 -type d ! -path '*/p/*' -name c -printBased on @StephaneChazelas' feedback I believe this method would eliminate the extraneous searching into any /p/ directories:find -mindepth 3 -type d -path '*/p/*' -prune -o -name c -printAnalyzing a findTo compare find commands you can add the debug switch -D search so that you can see how a particular find would perform vs. another.I ran @StephaneChazelas' command vs. mine to see where the differences were. The 2 commands are run and their output is run into an sdiff below:$ sdiff \<(find -D search -mindepth 3 -type d -path '*/p/*' -prune -o -name c -print 2>&1) \<(find -D search -type d -name p -prune -o -path './*/*/*' -name c -print 2>&1)consider_visiting (early): `.': fts_info=FTS_D , fts_level= 0 consider_visiting (early): `.': fts_info=FTS_D , fts_level= 0consider_visiting (late): `.': fts_info=FTS_D , isdir=1 ignor | consider_visiting (late): `.': fts_info=FTS_D , isdir=1 ignorconsider_visiting (early): `./a': fts_info=FTS_D , fts_level= consider_visiting (early): `./a': fts_info=FTS_D , fts_level=consider_visiting (late): `./a': fts_info=FTS_D , isdir=1 ign | consider_visiting (late): `./a': fts_info=FTS_D , isdir=1 ignconsider_visiting (early): `./a/c': fts_info=FTS_D , fts_leve consider_visiting (early): `./a/c': fts_info=FTS_D , fts_leveconsider_visiting (late): `./a/c': fts_info=FTS_D , isdir=1 i | consider_visiting (late): `./a/c': fts_info=FTS_D , isdir=1 iconsider_visiting (early): `./a/c/d': fts_info=FTS_D , fts_le consider_visiting (early): `./a/c/d': fts_info=FTS_D , fts_leconsider_visiting (late): `./a/c/d': fts_info=FTS_D , isdir=1 consider_visiting (late): `./a/c/d': fts_info=FTS_D , isdir=1consider_visiting (early): `./a/c/d/e': fts_info=FTS_D , fts_ consider_visiting (early): `./a/c/d/e': fts_info=FTS_D , fts_consider_visiting (late): `./a/c/d/e': fts_info=FTS_D , isdir consider_visiting (late): `./a/c/d/e': fts_info=FTS_D , isdirconsider_visiting (early): 
`./a/c/d/e': fts_info=FTS_DP, fts_ consider_visiting (early): `./a/c/d/e': fts_info=FTS_DP, fts_consider_visiting (late): `./a/c/d/e': fts_info=FTS_DP, isdir consider_visiting (late): `./a/c/d/e': fts_info=FTS_DP, isdirconsider_visiting (early): `./a/c/d': fts_info=FTS_DP, fts_le consider_visiting (early): `./a/c/d': fts_info=FTS_DP, fts_leconsider_visiting (late): `./a/c/d': fts_info=FTS_DP, isdir=1 consider_visiting (late): `./a/c/d': fts_info=FTS_DP, isdir=1consider_visiting (early): `./a/c': fts_info=FTS_DP, fts_leve consider_visiting (early): `./a/c': fts_info=FTS_DP, fts_leveconsider_visiting (late): `./a/c': fts_info=FTS_DP, isdir=1 i consider_visiting (late): `./a/c': fts_info=FTS_DP, isdir=1 iconsider_visiting (early): `./a/p': fts_info=FTS_D , fts_leve consider_visiting (early): `./a/p': fts_info=FTS_D , fts_leveconsider_visiting (late): `./a/p': fts_info=FTS_D , isdir=1 i | consider_visiting (late): `./a/p': fts_info=FTS_D , isdir=1 iconsider_visiting (early): `./a/p/c': fts_info=FTS_D , fts_le | consider_visiting (early): `./a/p': fts_info=FTS_DP, fts_leveconsider_visiting (late): `./a/p/c': fts_info=FTS_D , isdir=1 <consider_visiting (early): `./a/p/c': fts_info=FTS_DP, fts_le <consider_visiting (late): `./a/p/c': fts_info=FTS_DP, isdir=1 <consider_visiting (early): `./a/p': fts_info=FTS_DP, fts_leve <consider_visiting (late): `./a/p': fts_info=FTS_DP, isdir=1 i consider_visiting (late): `./a/p': fts_info=FTS_DP, isdir=1 iconsider_visiting (early): `./a': fts_info=FTS_DP, fts_level= consider_visiting (early): `./a': fts_info=FTS_DP, fts_level=consider_visiting (late): `./a': fts_info=FTS_DP, isdir=1 ign consider_visiting (late): `./a': fts_info=FTS_DP, isdir=1 ignconsider_visiting (early): `./c': fts_info=FTS_D , fts_level= consider_visiting (early): `./c': fts_info=FTS_D , fts_level=consider_visiting (late): `./c': fts_info=FTS_D , isdir=1 ign | consider_visiting (late): `./c': fts_info=FTS_D , isdir=1 ignconsider_visiting (early): `./c/a': 
fts_info=FTS_D , fts_leve consider_visiting (early): `./c/a': fts_info=FTS_D , fts_leveconsider_visiting (late): `./c/a': fts_info=FTS_D , isdir=1 i | consider_visiting (late): `./c/a': fts_info=FTS_D , isdir=1 iconsider_visiting (early): `./c/a/c': fts_info=FTS_D , fts_le consider_visiting (early): `./c/a/c': fts_info=FTS_D , fts_leconsider_visiting (late): `./c/a/c': fts_info=FTS_D , isdir=1 consider_visiting (late): `./c/a/c': fts_info=FTS_D , isdir=1consider_visiting (early): `./c/a/c/g': fts_info=FTS_D , fts_ consider_visiting (early): `./c/a/c/g': fts_info=FTS_D , fts_consider_visiting (late): `./c/a/c/g': fts_info=FTS_D , isdir consider_visiting (late): `./c/a/c/g': fts_info=FTS_D , isdirconsider_visiting (early): `./c/a/c/g/f': fts_info=FTS_D , ft consider_visiting (early): `./c/a/c/g/f': fts_info=FTS_D , ftconsider_visiting (late): `./c/a/c/g/f': fts_info=FTS_D , isd consider_visiting (late): `./c/a/c/g/f': fts_info=FTS_D , isdconsider_visiting (early): `./c/a/c/g/f': fts_info=FTS_DP, ft consider_visiting (early): `./c/a/c/g/f': fts_info=FTS_DP, ftconsider_visiting (late): `./c/a/c/g/f': fts_info=FTS_DP, isd consider_visiting (late): `./c/a/c/g/f': fts_info=FTS_DP, isdconsider_visiting (early): `./c/a/c/g': fts_info=FTS_DP, fts_ consider_visiting (early): `./c/a/c/g': fts_info=FTS_DP, fts_consider_visiting (late): `./c/a/c/g': fts_info=FTS_DP, isdir consider_visiting (late): `./c/a/c/g': fts_info=FTS_DP, isdirconsider_visiting (early): `./c/a/c': fts_info=FTS_DP, fts_le consider_visiting (early): `./c/a/c': fts_info=FTS_DP, fts_leconsider_visiting (late): `./c/a/c': fts_info=FTS_DP, isdir=1 consider_visiting (late): `./c/a/c': fts_info=FTS_DP, isdir=1consider_visiting (early): `./c/a': fts_info=FTS_DP, fts_leve consider_visiting (early): `./c/a': fts_info=FTS_DP, fts_leveconsider_visiting (late): `./c/a': fts_info=FTS_DP, isdir=1 i consider_visiting (late): `./c/a': fts_info=FTS_DP, isdir=1 iconsider_visiting (early): `./c': fts_info=FTS_DP, fts_level= 
consider_visiting (early): `./c': fts_info=FTS_DP, fts_level=consider_visiting (late): `./c': fts_info=FTS_DP, isdir=1 ign consider_visiting (late): `./c': fts_info=FTS_DP, isdir=1 ignconsider_visiting (early): `.': fts_info=FTS_DP, fts_level= 0 consider_visiting (early): `.': fts_info=FTS_DP, fts_level= 0consider_visiting (late): `.': fts_info=FTS_DP, isdir=1 ignor consider_visiting (late): `.': fts_info=FTS_DP, isdir=1 ignor./c/a/c ./c/a/cIf you notice, there's a gap in Stephane's approach that mine doesn't have. Even with the prune. I think this shows that his method is avoiding extra work in walking into directories that it should otherwise be ignoring.
_datascience.21872
I am going through GANs for image generation and I am using this article for reference. The author creates a generator model which does this, and the generator model code is

self.G = Sequential()
dropout = 0.4
depth = 64+64+64+64
dim = 7
# In: 100
# Out: dim x dim x depth
self.G.add(Dense(dim*dim*depth, input_dim=100))
self.G.add(BatchNormalization(momentum=0.9))
self.G.add(Activation('relu'))
self.G.add(Reshape((dim, dim, depth)))
self.G.add(Dropout(dropout))
# In: dim x dim x depth
# Out: 2*dim x 2*dim x depth/2
self.G.add(UpSampling2D())
self.G.add(Conv2DTranspose(int(depth/2), 5, padding='same'))
self.G.add(BatchNormalization(momentum=0.9))
self.G.add(Activation('relu'))
self.G.add(UpSampling2D())
self.G.add(Conv2DTranspose(int(depth/4), 5, padding='same'))
self.G.add(BatchNormalization(momentum=0.9))
self.G.add(Activation('relu'))
self.G.add(Conv2DTranspose(int(depth/8), 5, padding='same'))
self.G.add(BatchNormalization(momentum=0.9))
self.G.add(Activation('relu'))
# Out: 28 x 28 x 1 grayscale image [0.0,1.0] per pix
self.G.add(Conv2DTranspose(1, 5, padding='same'))
self.G.add(Activation('sigmoid'))
self.G.summary()

I understood most of the code but I have doubts about these two functions

self.G.add(UpSampling2D())
self.G.add(Conv2DTranspose(int(depth/number), 5, padding='same'))

What is happening in those layers?
How is the generator code works in a GAN?
machine learning;keras;convnet
null
_unix.145736
I am using Postfix + Dovecot + IMAP + Maildir and I am accessing my server from two different machines (desktop and laptop), i.e. two clients (Thunderbird). They are sometimes running simultaneously, and sometimes only one or the other client is running.

I have moved a message from Inbox to Archive in one client, and then started the second client. In the second client, I see the message still in Inbox, and I see it in Archive as well.

I have checked on my server (this is easy because I am using Maildir) and the message is no longer in the Inbox. It is in the Archive.

So why does the second client show it both in Inbox and Archive? Apparently, it has not realized that the message has been moved while it was not running.

Is there a problem with syncing? How can I make two clients accessing the same IMAP server behave orderly?

I have been using IMAP through multiple clients before, although the server was Exchange (not under my control, so I have no idea how it was set up). This problem did not exist. Is there some setting in Dovecot?
multiple clients accessing one email account over IMAP
email;postfix;thunderbird;imap;dovecot
null
_reverseengineering.15862
I've got a file which is digitally signed, but I would like to know how (the method/format/...).

A binwalk on the file reveals a JFFS2 archive (after an offset of 64 bytes... that's maybe something?), which I can extract, edit and browse; but nothing about any signature.

Do you have any experience with this kind of stuff? Could the first 64 bytes be a signature? What kind of format? Are there any tools (openssl?) allowing one to detect a signature (in any possible format) in a file?
Detect and find a digital signature in a file
firmware
null
_codereview.116945
I am practicing a problem in Cracking the Coding Interview. The problem is to remove duplicates in a linked list without the use of a buffer. I interpreted this as not using a list or any large data structure such as a hash to store unique nodes. My algorithm is inefficient, I think. It iterates an anchor across the linked list. From that anchor, a second sub-iteration occurs which removes any node that is the same as the anchor. Thus, there are a total of two loops. I think the time complexity is either O(n^2) or O(nlogn).

Any better algorithms are welcome. Since I am a novice in Python, please give me suggestions on my coding style.

class Node(object):
    def __init__(self, data):
        self.data = data
        self.next = None

    def getData(self):
        return self.data

    def setNext(self, node):
        self.next = node

    def getNext(self):
        return self.next

class LinkedList(object):
    def __init__(self, dataList):
        assert len(dataList) > 0
        self.head = Node(dataList[0])
        iterNode = self.head
        for i in range(1, len(dataList)):
            iterNode.setNext(Node(dataList[i]))
            iterNode = iterNode.getNext()
        iterNode.setNext(None)

    def printList(self):
        iterNode = self.head
        while iterNode is not None:
            print(iterNode.getData())
            iterNode = iterNode.getNext()

    def removeDuplicates(self):
        assert self.head is not None
        anchor = self.head
        while anchor is not None:
            iterator = anchor
            while iterator is not None:
                prev = iterator
                iterator = iterator.getNext()
                if iterator is None:
                    break
                if iterator.getData() == anchor.getData():
                    next = iterator.getNext()
                    prev.setNext(next)
            anchor = anchor.getNext()

dataList = ["hello", "world", "people", "hello", "hi", "hi"]
linkedList = LinkedList(dataList)
linkedList.printList()
linkedList.removeDuplicates()
print("\n")
linkedList.printList()

Output:

hello
world
people
hello
hi
hi

hello
world
people
hi
Removing duplicates from a linked list without a buffer
python;linked list
Sensible overloading

String representation

Think about a native list like [1, 2, 3]: would you think it user-friendly if the REPL behaved something like this?

>>> [1, 2, 3]
<__main__.list object at 0x7f9b399475c0>

But your class behaves exactly like that:

>>> LinkedList([1,2,3])
<__main__.LinkedList object at 0x7f9b3a69e908>

So instead of that weird printList(self), implement a __repr__ method for usability's sake.

Iteration

A list that you cannot iterate over makes no sense. In Python each container must expose a way to be iterated over with a for loop, like:

>>> for i in LinkedList([1,2,3]): print(i)

That does not work for your implementation. Manual looping like you do in printList is out of the question in Python. Such low-level details should be written once in an __iter__ method and then forgotten.
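A minimal sketch of both suggestions applied to the question's class (the constructor is kept from the original code; the exact __repr__ format shown is just one reasonable choice):

```python
class Node(object):
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList(object):
    def __init__(self, dataList):
        self.head = Node(dataList[0])
        node = self.head
        for item in dataList[1:]:
            node.next = Node(item)
            node = node.next

    def __iter__(self):
        # expose each node's data to `for` loops, list(), etc.
        node = self.head
        while node is not None:
            yield node.data
            node = node.next

    def __repr__(self):
        # friendly REPL display instead of <__main__.LinkedList object at ...>
        return "LinkedList([{}])".format(", ".join(repr(x) for x in self))
```

With this, `LinkedList([1, 2, 3])` displays as `LinkedList([1, 2, 3])` at the prompt, and `for i in LinkedList([1, 2, 3])` just works; printList becomes unnecessary.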
_cs.71289
I am trying to understand some notes regarding the A* algorithm. The example used is to show how the algorithm can be used as a (more efficient) alternative to Dijkstra's algorithm for finding the shortest path. I am reading about A* search starting at page 71 of the UK GCE Computing textbook, Rouse, to find the shortest path. I have also read the Wikipedia entry - but to no avail. Two things do not make sense to me.

(i) Where do the heuristic values for shortest distance come from? There is mention of straight roads but I don't understand this. I know a heuristic is an informed guess, but why not values 1237, 978, 516, ... etc. or any other arbitrary, descending values for the heuristic distance still to go?

(ii) What is the significance and meaning of "the heuristic must never make an over-estimate"? How do we know that, and why is it important? Surely all the intermediate distances before the completion of the A* algorithm are over-estimates.

Regards,
Clive
Where does the heuristic come from in the A-star algorithm and how do we know it has the right properties?
algorithms;search algorithms;shortest path
null
_unix.156387
I was trying to install Oracle Linux 6.5 from this link. I downloaded OracleLinux-R6-U5-Server-x86_64-dvd. Now while installing I didn't get any GUI, as the link said I would get after the test/skip menu. But I installed the server without a GUI anyway in the vbox. I have ubuntu-14.04 as host. But now in the terminal I can't connect to the internet from the guest.

I have selected bridged adapter in the settings.

The main purpose of installing Oracle was to use and practice the Oracle database 12c1. I don't actually need internet as long as I can run sql commands in the Oracle server. As there is no GUI, I am really frustrated now. So if someone could tell me the step-by-step commands of how to

1) connect to the internet
2) log in to the Oracle 12c server so that I can run sql commands.

Non-GUI or GUI mode.
How to install and run oracle 12c in oracle linux 6.5?
oracle database;oracle linux
null
_unix.2313
I am creating a deb package of a product which is part open source and part proprietary. In order to reuse the built-in functionality of some distributions like Ubuntu to monitor a list of repositories and update the package when a new version becomes available, I will probably create such a repository.

The problem is that the proprietary part of the package depends on licenses which are valid for a limited interval of versions (valid for all the versions in the course of a year). This means that at some point in time, when a new version is available, it would be nice to at least warn the user that his license will not be valid for the new version.

Is there a way to do that check and interact with the user? I see that there are scripts in the deb package itself that can be executed before installation, but I have no idea if they can interact with the user and abort the package installation.
How to implement conditional update of deb package
linux;packaging;debian
null
_softwareengineering.214616
I'm looking for an umbrella term for all the nitty-gritty requirements that it's helpful to have specified up front but which the client never thinks about in his excitement about the product.

In the client's mind, they think of the headline requirements, the main user stories such as "I need to be able to view the status of all equipment" and "I need to receive automatically any notification of equipment failure". But at times the mundane ones are there but the client never mentions them until the result differs from his unspoken expectations: "Well of course a user must be able to change his own password" and "Of course the user's session must time out after 30 mins".

Any development team can anticipate what's likely to be needed, but is there a name for this category of requirements? Obvious/infrastructural/boring/details?
What to call requirements that are assumed/invisible/very obvious
requirements;specifications
null
_unix.190375
I currently have a RedHat 6.5 KVM guest running under a RedHat 6.5 host. I am attempting to make a connection to a Rational Apex licence server on the local network; however, the connection times out before licence provision. I currently have a bridged network interface and have already set up packet forwarding, and am successfully able to ssh to the VM. Is there any particular testing that I should use to determine the issue?
Connection issues with KVM on RHEL6.5 to Licence Server
rhel;kvm
null
_softwareengineering.278499
I'm having a little trouble understanding how exactly Open Source licenses actually work. I have programmed for a while, but just for my personal use, and I wrote all the code I needed myself. As I'm thinking of making some apps for people and I need a fast XML parser, I thought I could use RapidXml. However, I'm not sure how it's going to work under the license.

So, here's my question: if I use some code as part of my application, without any modification, how does its license affect me? Exactly as if I were modifying it and distributing it? Do I have to publish my entire code?

Also, coding this app will require knowing the implementation of another open source project. Though I'm going to write my code in a different language, i.e. I just need to know how it works for me to reproduce it. How does the license work here? (As a matter of fact, this one is GPLv3, and the other is not.)
Using Open Source code in a project
open source
There are lots and lots of different open source licenses which have lots and lots of different conditions and fine print. They all work differently.

RapidXML is dual-licensed under the MIT License and Boost License, and you are allowed to choose freely under which license you want to use it. They are both so-called permissive open source licenses. A permissive license allows you to modify and/or use the software as part of a product which is licensed under any license conditions you want. You don't need to publish your source code, even when you modify the library. The only condition is that you place the original copyright message with the license conditions in the credits of your product so the end-users know that it uses the RapidXML library.

The GPL, however, is a different beast. It is a so-called copyleft or share-alike license, which means that any product which includes GPL-licensed code must also be licensed under the GNU GPL. However, when you reimplement features with completely original code and do not use any of the original code, you are creating an own work which is only bound to your license conditions. But be aware that a straight 1:1 translation into another programming language is a legal gray area. The GNU GPL FAQ says that this is not allowed:

What does the GPL say about translating some code to a different programming language? Under copyright law, translation of a work is considered a kind of modification. Therefore, what the GPL says about modified versions applies also to translated versions.

<advertising>When you have more questions about open source licenses, you might want to commit to the new proposal Open Source Stackexchange. It still needs people to commit to it so it can go into beta phase.</advertising>
_unix.385314
I want to apply the following command to all the files in the current directoryclustalw -align -infile=*here goes the input filename* -outfile=*here goes the output filename*I also want the output filename to be the same as the input plus .aln. For example:clustalw -align -infile=nexus0000.fna -outfile=nexus000.fna.alnThanks for the help!
Apply command to all files in a directory
shell script;files;scripting
You could use a for loop and globbing by filename extension:

for file in *.fna; do
    clustalw -align -infile="$file" -outfile="$file.aln"
done

If you want to use a single command, you can use find:

find . -maxdepth 1 -type f -iname '*.fna' -exec clustalw -align -infile {} -outfile {}.aln \;
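Before running either version on real data, it can help to do a dry run with echo standing in for clustalw, just to confirm the generated filenames (the directory and file names below are made up for the demo):

```shell
mkdir -p /tmp/clustal_demo
cd /tmp/clustal_demo
touch nexus0000.fna nexus0001.fna

for file in *.fna; do
    # echo prints the command instead of executing clustalw
    echo clustalw -align -infile="$file" -outfile="$file.aln"
done
```

This prints one command line per file; once the in/out names look right, delete the echo to run clustalw for real.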
_datascience.16645
I'm trying to build a regression model where I see which attributes are influencing the margin. My data set looks something like this:

UserId | [products_bought] | revenue | [places_visited] | discount --> Margin

In the above-mentioned schema a single user might have bought a set of products (a maximum of 4000), and the places visited can be in the hundreds.

I tried to flatten the data, but this is very sparse; are there any approaches to efficiently model this kind of problem?
Modelling query in regression
classification;regression;feature selection;linear regression;dimensionality reduction
null
_softwareengineering.112407
When starting a new project, my boss always avoids making fixed decisions. He is usually saying: "ok, just start to write something and be as generic as possible. When you're finished we look how we continue." His argument is basically "you never know" and "agile development".

To keep the question as generic as possible: what do you do if your boss doesn't like to make decisions?

Just stick to it and write code that might undergo heavy refactoring and partial rewriting a few weeks later? Or keep on discussing until the boss makes at least a few decisions? This is more or less my current strategy. Because, like a law of physics, at some point something needs to be delivered. Either because the boss' boss wants to see results or because stuff is becoming ridiculous at some point.

I also observe that my boss criticizes nearly everything. Even suggestions that are based on his own...
What to do if boss always postpones major decisions about requirements and overall design?
design;project management;requirements;decisions
Build prototypesJust start drawing screens that don't do anything at first (presumably you have enough to do that?)You should be able to make it partially functional slowly, and eventually refactor some of the bad code when it becomes more clear what you are trying to do.It's a common problem that they don't know what they want until they see something and realize that's not what they want. I've found that when someone wants you to just start building 'a framework' or something 'generic' like what he is telling you, you are just going to get in trouble if you try. The frameworks are already written, you don't need to do that.
_unix.313023
How can I disable all the tooltips in RHEL7 using the GNOME GUI? Tooltips are full of bugs: a tooltip shows up in one window, and it stays on top in another window!
Disable all the tooltips on a GNOME/RHEL7 Desktop!
rhel;gnome;desktop environment
null
_softwareengineering.219516
When using closures in other languages, it just feels natural: variables from the outer scope are captured automatically, without the need to declare such captures.

In C++11, it's good to see we have closures, but why do they make them so uncomfortable to work with? Not only do you have to declare the captures, but also read/write controls.

It's not likely implementation considerations, so what kind of philosophy led to this awkward spec?
why c++11 define closure as a process of capturing variables
c++;c++11;lambda
null
_cs.65663
I am dealing with the following statement:the run time for Dijkstra's on a connected undirected graph with positive edge weights using a binary heap is $\Theta$ of the run time for Kruskal's using union-find.However, this is false and I fail to see why this is so. Dijkstra's algorithm when using binary heap is $O(|E|\log|V|)$. The same is true for Kruskal's algorithm as its complexity is $O(|E|\log|V|)$.Hence, can someone reconcile this difference? Are there certain edge cases for when this isn't true, hence making this statement false?
Run time of Dijkstra's compared to Kruskal's algorithm using union-find
graphs;algorithm analysis;runtime analysis
null
_scicomp.25169
I need to build a new desktop PC where ab-initio DFT calculations are going to be performed. I am searching for a CPU in the value range 600 - 1000. I was thinking about the six-core Intel Core i7-6850K or the 8-core Intel Core i7-6900K. However, Intel Core ix processors are desktop processors, so they are optimized for multimedia applications and games; I think there are some features I don't need. In the case of DFT calculations the most important stuff is matrix-matrix multiplication and FFT. So I was also thinking about Intel Xeon processors. For the same price I can buy an 8-12-core CPU, but with a lower frequency (Intel Xeon Processor E5-2630 v4). Now I don't know what to buy.

I cannot find any DFT CPU benchmarks on the web. Can you help me please?

EDIT: To make it more clear: in the value range 600 - 1000, is it better to buy an Intel i7 or an Intel Xeon for DFT calculations? I am aware of GPU computing and that there are also server solutions, but I am not asking about that.
CPU for ab-initio DFT calculations
performance;density functional theory
null
_unix.368785
I have two devices, /dev/sdc1 and /dev/sdc3. /dev/sdc1 is mounted on / and /dev/sdc3 is mounted on /home.

The problem is that the device sdc1 is full whereas sdc3 has a lot of spare space. I want to determine where all that space on sdc1 went (I suspect installed packages; a lot of them).

I tried to use du (and ncdu, which makes this info more visual) but it calculates the size of the whole tree (/home is nested on /) and I want to know only what's filling one of my devices. Is there a way to do that?

To clarify: I need some way to calculate usage of a device, not just exclude some subdirectories (which may be applicable in my situation, but I believe it can be more complex than that).
Estimate file space usage on a device (not in a directory)
disk usage
To limit to a single device you need the -x parameter of du:

-x, --one-file-system
    skip directories on different file systems

A useful graphic front-end to du is xdiskusage.
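A small demonstration of the flag on a scratch directory (the paths are invented for the demo; point it at / as root for the real case, and any mount point below it, like /home, is then skipped automatically):

```shell
# create a little tree to measure
mkdir -p /tmp/du_demo/sub
echo "some data" > /tmp/du_demo/sub/file

# -x: stay on one filesystem, -s: summary, -h: human-readable
du -shx /tmp/du_demo

# per-subdirectory breakdown on the same filesystem, largest last (GNU du)
du -hx -d 1 /tmp/du_demo | sort -h
```

ncdu accepts the same idea via `ncdu -x /` if you prefer the interactive view.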
_unix.26754
So mirroring is bad:

0:root@SERVER:/root # lslv -m hd2
hd2:/usr
LP    PP1  PV1      PP2  PV2      PP3  PV3
0001  0209 hdisk30  0322 hdisk32
0002  0210 hdisk30  0323 hdisk33
0003  0211 hdisk30  0323 hdisk32
0004  0212 hdisk30  0324 hdisk33
0005  0213 hdisk30  0324 hdisk32
0006  0214 hdisk30  0325 hdisk33
0007  0215 hdisk30  0325 hdisk32
0008  0216 hdisk30  0326 hdisk33
0009  0217 hdisk30  0326 hdisk32
0010  0218 hdisk30  0327 hdisk33
0011  0219 hdisk30  0327 hdisk32
0012  0220 hdisk30  0328 hdisk33
0013  0221 hdisk30  0328 hdisk32
0014  0222 hdisk30  0329 hdisk33
0015  0223 hdisk30  0329 hdisk32
0016  0224 hdisk30  0330 hdisk33
0017  0225 hdisk30  0330 hdisk32
0018  0226 hdisk30  0331 hdisk33
0019  0227 hdisk30  0331 hdisk32
0020  0228 hdisk30  0332 hdisk33
0021  0229 hdisk30  0332 hdisk32
0022  0230 hdisk30  0333 hdisk33
0023  0231 hdisk30  0333 hdisk32
0024  0355 hdisk30  0338 hdisk32
0025  0356 hdisk30  0339 hdisk32
0026  0357 hdisk30  0340 hdisk32
0027  0001 hdisk32  0307 hdisk8
0028  0206 hdisk8   0305 hdisk43
0029  0207 hdisk8   0306 hdisk43
0:root@SERVER:/root #

How can I fix this? I know that it's just a few steps, but I can't google it :\ [break the mirror, then move the pp from the wrong disk to a good one, then unbreak the mirror? how?]

oslevel: 6100-05-01-1016 AIX
How to fix bad mirroring on AIX?
aix;lvm
COMMAND     LV/LP/COPYX   HDISK
migratelp   hd2/27/1      hdisk30

So it was only one command to move the LP of copy 1 onto the good hdisk.
_unix.260901
I am new to learning Linux. I have installed Linux Mint 17 on my 32-bit laptop and bought a FriendlyARM Mini2440 development board to do some basic programming and learn concepts of Linux.However, I couldn't find any documentation on how to install the cross-compiler and toolchain for FriendlyARM Mini2440 on Linux Mint (I found it for Ubuntu, though). I am using this tutorial to start my system and have followed all the steps.My problem is that while I am able to install and execute the cross-compiler and toolchain correctly for the first time, after restarting, when I issue the command arm-none-linux-gnueabi-cc v, it gives me an error.How can I get FriendlyARM Mini2440 to work on Linux Mint?
Linux Mint + FriendlyARM 2440: Cannot install and execute cross-compiler and toolschain (arm-linux-gcc-4.4.3.tar.gz)
compiling;arm
In the link https://alselectro.wordpress.com/category/friendly-arm-mini2440/ they suggest pasting the following line into /root/.bashrc:

export PATH=$PATH:/opt/FriendlyArm/toolschain/4.4.3/bin

However, I didn't have the /root/.bashrc file in my Linux Mint, so I was getting an error for arm-none-linux-gnueabi-cc v.

After much searching, I found that I can paste the line into /root/.profile instead. After doing this, the ARM compiler is initialized on start-up and seems to be working fine now.

Although the procedures in the link are for FriendlyARM 2440 + Ubuntu, I have tested all of them on Linux Mint 17, and except for this small change, all seem to work fine.
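A sketch of making that change from a shell (the toolchain path is the one from the link; adjust it to your install location, and use /root/.profile when working as root):

```shell
# Append the export once to the login profile, avoiding duplicates,
# then source it so the current shell picks it up immediately.
LINE='export PATH=$PATH:/opt/FriendlyArm/toolschain/4.4.3/bin'
PROFILE="$HOME/.profile"
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || printf '%s\n' "$LINE" >> "$PROFILE"
. "$PROFILE"
```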
_unix.386148
The process of getting PuTTY communicating with OpenSSH has thwarted me for quite some time. I installed OpenSSH with:

sudo apt-get install openssh

Then I generated SSH keys with:

ssh-keygen -t rsa -b 4096 -C "my user here"

The above command put the public and private key pair in my user's home .ssh directory (/home/myUser/.ssh). (I think I may have had to create the .ssh folder there in order for ssh-keygen to work properly.)

Then I copied the private key to Windows and tried to use it in PuTTY. The server kept denying me.
How do I communicate with OpenSSH on Linux from Putty on Windows?
openssh;putty
null
_codereview.33064
I am trying to set up an odds calculator for a best-of-7 series, given independent odds for each event. The following code works, but I would like to add recursion to simplify the end.

public class Game
{
    public int No { get; set; }
    public List<decimal> Odds;

    public Game(int no, decimal odd1, decimal odd2)
    {
        No = no;
        Odds = new List<decimal>();
        Odds.Add(odd1);
        Odds.Add(odd2);
    }
}

void Main()
{
    var games = new List<Game>();
    var homeodd = .6m;
    var awayodd = .4m;
    var winningOdds = 0m;

    // Add 7 games, each with a different odd of winning the game
    games.Add(new Game(1, homeodd, awayodd));
    games.Add(new Game(2, homeodd, awayodd));
    games.Add(new Game(3, awayodd, homeodd));
    games.Add(new Game(4, awayodd, homeodd));
    games.Add(new Game(5, awayodd, homeodd));
    games.Add(new Game(6, homeodd, awayodd));
    games.Add(new Game(7, homeodd, awayodd));

    // game one has 2 possible outcomes, 0 = win, 1 = loss => same for all 7 games
    for (int i = 0; i < 2; i++)
    {
        for (int j = 0; j < 2; j++)
        {
            for (int k = 0; k < 2; k++)
            {
                for (int l = 0; l < 2; l++)
                {
                    for (int m = 0; m < 2; m++)
                    {
                        for (int n = 0; n < 2; n++)
                        {
                            for (int o = 0; o < 2; o++)
                            {
                                // if we have lost fewer than 4 games, we have won the series,
                                // and we want to add the odds of that possibility to the total odds
                                if ((i + j + k + l + m + n + o) < 4)
                                {
                                    winningOdds += games[0].Odds[i] * games[1].Odds[j] * games[2].Odds[k]
                                                 * games[3].Odds[l] * games[4].Odds[m] * games[5].Odds[n]
                                                 * games[6].Odds[o];
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    Console.WriteLine(winningOdds);
}
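The sum over outcome vectors above can indeed be folded into a recursion. A hedged sketch of the equivalent computation (in Python rather than C#, with illustrative names): branch into win/loss per game and prune as soon as 4 losses occur.

```python
def series_win_odds(games, losses=0, i=0):
    """games is a list of (win_prob, loss_prob) per game, in order."""
    if losses >= 4:
        return 0.0            # series already lost, prune this branch
    if i == len(games):
        return 1.0            # all 7 games played with < 4 losses: series won
    win, lose = games[i]
    return (win * series_win_odds(games, losses, i + 1)
            + lose * series_win_odds(games, losses + 1, i + 1))

# Same schedule as the question: games 3-5 are away games.
home, away = 0.6, 0.4
games = [(home, away)] * 2 + [(away, home)] * 3 + [(home, away)] * 2
print(series_win_odds(games))
```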
Optimizing odds calculator
c#;optimization;recursion
null
_codereview.82871
I am new to Python and I am writing my first utility as a way to learn about strings, files, etc. I am writing a simple utility using string replacement to batch output HTML files. The program takes as inputs a CSV file and an HTML template file and will output an HTML file for each data row in the CSV file.

CSV Input File: test1.csv

The CSV file, which has a header row, contains some catalog data, one product per row, like below:

stockID,color,material,url
340,Blue and magenta,80% Wool / 20% Acrylic,http://placehold.it/400
275,Purple,100% Cotton,http://placehold.it/600
318,Blue,100% Polyester,http://placehold.it/400x600

HTML Template Input File: testTemplate.htm

The HTML template file is simply a copy of the desired output with string replace tags %s placed at the appropriate locations:

<h1>Stock ID: %s</h1>
<ul>
    <li>%s</li>
    <li>%s</li>
</ul>
<img src='%s'>

The Python is pretty straightforward, I think. I open the template file and store it as a string. I then open the CSV file using csv.DictReader(). I then iterate through the rows of the CSV, build the file names and then write the output files using string replacement on the template string using the dictionary keys.

import csv

# Open template file and pass string to 'data'. Should be in HTML format except with string replace tags.
with open('testTemplate.htm', 'r') as myTemplate:
    data = myTemplate.read()
    # print template for visual cue.
    print('Template passed:\n' + '-'*30 + '\n' + data)
    print('-'*30)

# open CSV file that contains the data and store to a dictionary 'inputFile'.
with open('test1.csv') as csvfile:
    inputFile = csv.DictReader(csvfile)
    x = 0  # counter to display file count
    for row in inputFile:
        # create filenames for the output HTML files
        filename = 'listing' + row['stockID'] + '.htm'
        # print filenames for visual cue.
        print(filename)
        x = x + 1
        # create output HTML file.
        with open(filename, 'w') as outputFile:
            # run string replace on the template file using items from the data dictionary
            # HELP--> this is where I get nervous because chaos will reign if the tags get mixed up
            # HELP--> is there a way to add identifiers to the tags? like %s1 = row['stockID'], %s2 = row['color'] ... ???
            outputFile.write(data % (row['stockID'], row['color'], row['material'], row['url']))

# print the number of files created as a cue program has finished.
print('-'*30 + '\n' + str(x) + ' files created.')

The program works as expected with the test files I have been using (which is why I am posting here and not on SO). My concern is that it seems pretty fragile. In 'production' the CSV file will contain many more columns (around 30-40) and the HTML will be much more complex, so the chances of one of the tags in the string replace getting mixed up seem pretty high. Is there a way to add identifiers to the tags, like %s1 = row['stockID'], %s2 = row['color'] ...? They could be placed either in the template file or in the write() statement (or both). Any method alternatives or improvements I could learn would be great (note: I am well aware of the Makos and Mustaches of the world and plan to learn a couple of template packages soon).
String replace templating utility
python;beginner;html
Python has a number of templating options, but the simplest to start with is probably the string.Template one described in https://docs.python.org/3/library/string.html#template-strings

This supports targets such as $StockId and is used as below:

>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'

If you need more output options, look at the string.format functionality, but this is probably best for starting with.
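Applied to the question's setup, the placeholders can be named after the CSV column headers, so each value can only land in its matching slot (template text and data below are trimmed from the question's examples):

```python
import csv
import io
from string import Template

# Named placeholders keyed by the CSV columns instead of positional %s.
template = Template("<h1>Stock ID: $stockID</h1>\n<img src='$url'>")

csv_text = "stockID,color,material,url\n340,Blue,Wool,http://placehold.it/400\n"
for row in csv.DictReader(io.StringIO(csv_text)):
    # DictReader rows are mappings, so they can be passed straight in;
    # extra columns (color, material) are simply ignored by substitute().
    print(template.substitute(row))
```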
_unix.273398
We have a proprietary SBC (Single Board Computer) platform that is essentially a large-scale Raspberry Pi. It is ARM based and uses a Linux kernel v3.x as its base. This isn't an Ubuntu question, but I know Ubuntu does have ARM support in some versions, so maybe there is shared knowledge.

We use a provided BSP (Board Support Package) to interface with it through UART and can give it very basic Linux commands via sh. It's frustrating, as I cannot use most of the bash commands I am used to.

Is there any way to increase functionality by 'adding' bash to different Linux platforms?
Add bash to base Linux Kernel?
bash;shell;embedded
null
_unix.172102
I am porting a software system whose central Java application talks to a PHP webpage. The PHP site has a subpage for configuring network parameters, which the code implements by reading from and writing to the /etc/network/interfaces file. The older version of this system was based on Raspbian OS, but now we have switched to Arch Linux, which has a different network setup approach. I am thinking of creating an /etc/network/interfaces file in Arch Linux and modifying it through the Arch Linux network manager. Does PHP maybe have a library created for such purposes?
Raspbian /etc/network/interfaces file not present in ArchLinux
raspbian;networking
null
_webapps.8938
I have a contact with two email addresses: a personal and a business account. I email him regularly at both, depending upon the subject. Occasionally I send an email to his personal account while including other people at his business account, by complete accident. How can I set up Gmail to stop me from emailing his personal account when there are other contacts included that are from his business domain?
How to stop me from mailing the wrong domain from Gmail?
gmail;gmail contacts
null
_webmaster.15229
Been trying for a while to get this script fully working; I think I'm nearly there. OK, so I basically need the script to show 3 things: email address - browser type - time logged in.

EMAILS='/home/user/emaillist.txt'
MAIL=$EMAILS
while read MAIL; do
    grep -f $EMAILS /var/log/apache2/ssl_access.log >> /home/user/test.txt
done < $EMAILS

I need some way of first using today's date in here so it only searches the Apache logs for that data. I was thinking about using the Linux date +"%d/%b/%Y" command in the script, as this will match the correct date format. The other aspect is that I only need the output to show the user's email address | browser type and date from Apache, nothing else. HELP please.

UPDATE

Managed to play around with the script and got it to this stage, where it works fine; I just need to work out how to add browser data into the text file:

x=$(date +%d/%b/%Y)
y=$(date +%d%b%Y)
filename=emaillist.txt
while read filename
do
    if COUNT=$(grep $filename /var/log/apache2/ssl_access.log | grep -c $x)
    then
        echo $filename:$COUNT >> /home/user/logs/usage$y.txt
    fi
done < emaillist.txt
Apache log analytic's
apache;looking for a script;server side scripting;filenames;apache log files
If you put

grep $filename /var/log/apache2/ssl_access.log | grep $x | awk -F\" '{print $6}' | sort -u >> /home/user/logs/usage$y.txt

before the fi, that should put the browser information into the text file (and remove duplicate lines where they are the same) after the line with the count on it.
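Assuming a combined log format, the user-agent is the third quoted field, so splitting on double quotes puts it in awk field 6; a quick self-contained check (the log line is made up):

```shell
# With -F'"' the fields of a combined-log line alternate between
# unquoted and quoted parts; $6 is the User-Agent string.
line='1.2.3.4 - - [01/Jan/2011:00:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"'
printf '%s\n' "$line" | awk -F'"' '{print $6}'   # prints Mozilla/5.0
```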
_cs.33086
That's probably a vague question but allow me to try and give an example:My compiler does transformations on HTML (from HTML to HTML). It scans a flattened DOM tree, and relies on lookbehinds (on elements pushed onto a stack) to decide what transformation to apply. I can give more detail if necessary but I don't want to lose the reader. Can such logic be alternatively implemented with no mutable state? I have become a big believer in functional programming and have made it a point to make my code as functional-style as possible. I don't like loops that perform actions based on the content of a previous iteration, by saving the information from a previous iteration in stacks or booleans. I need to augment the functionality of this routine and will probably tip the code from being just about understandable to only the author will understand this, everyone else don't touch.But I have little background in compiler theory & development so am wondering if the problem domain necessitates mutable state in practice.
Is it possible to write an HTML compiler with no mutable state?
compilers;pushdown automata;functional programming
In theory, any program can be treated as functional by treating every operation as "replace the world with a world where this piece of state has a different value". In practice ... this actually works pretty well.

Your code will look something like:

root_node = parse_html()
for each operation:
    root_node = root_node.replace_using_operation(operation)

For a leaf or node that you know won't be affected by this operation, replace_using_operation looks like:

fn replace_using_operation(self, op):
    return self

For the typical non-leaf case, replace_using_operation will look like:

fn replace_using_operation(self, op):
    children = new List<Node>()
    # pedantically, this is not functional, but realistically it is.
    # if your language has generators or an iterator map function you can
    # make it purely functional, but most people find this easier to read.
    for each ch in self.children:
        children.append(ch.replace_using_operation(op))
    new_self = new Node(children)
    delete self
    return new_self

Of course, depending on what knowledge you have of the operation, you might want to perform different operations on some children. Also note that you're perfectly free to pass additional arguments (including snapshots of other branches of the tree - just remember you don't own them) or to have entirely separate replace_* functions instead of passing operation as an argument.

This code is only a very rough skeleton, after all.
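A runnable toy version of that skeleton (a Python sketch; the Node type and the rename operation are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class Node:
    tag: str
    children: Tuple["Node", ...] = ()

def replace_using_operation(node: Node, op: Callable[[Node], Node]) -> Node:
    # Rebuild the subtree bottom-up, then let the operation replace this
    # node; nothing is ever mutated in place.
    rebuilt = Node(node.tag, tuple(replace_using_operation(c, op) for c in node.children))
    return op(rebuilt)

# Example operation: rename every <b> to <strong>.
rename = lambda n: Node("strong", n.children) if n.tag == "b" else n

tree = Node("div", (Node("b"), Node("p")))
new_tree = replace_using_operation(tree, rename)
```

The original tree is untouched afterwards, which is the point: each operation produces a new world rather than editing the old one.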
_unix.43846
I'm writing a script which creates project archives and then creates 7z archives of them, to make it easier on me to save specific versions and keep encrypted backups. After I've generated the archives and get to the encryption phase, I'd like to encrypt the files with one call to gpg if possible, so as to only have the user input their passphrase once. Otherwise, we'd either have to cache the user's passphrase in memory (which I'd really like not to do) or have them input and confirm their passphrase for every single project that is archived (which is worse).

Is there a way to pass multiple filenames to gpg to have it encrypt all of them in one go? If I try this:

$ gpg --cipher-algo AES256 --compression-algo BZIP2 -c project1.7z project2.7z

...I see the following error in the shell:

usage: gpg [options] --symmetric [filename]

Is there a way to do what I'm looking to accomplish?
Encrypt multiple files at once
gpg
Is there a way to pass multiple filenames to gpg to have it encrypt all of them in one go?

No, there is not.

You will likely want to pass the passphrase with one of the following gpg options (the last would be the most secure choice):

--passphrase
--passphrase-file
--passphrase-fd
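For example, a sketch of the loop the script could use with --passphrase-file (the passphrase/file names here are illustrative, and on GnuPG >= 2.1 --pinentry-mode loopback is typically needed for the passphrase options to take effect):

```shell
# Collect the passphrase once into a file, then encrypt every archive
# without gpg re-prompting. Guarded so it degrades if gpg is absent.
if command -v gpg >/dev/null; then
  printf 'demo-passphrase\n' > pw.txt          # normally entered once by the user
  printf 'demo archive contents\n' > project1.7z
  for f in project*.7z; do
    gpg --batch --yes --pinentry-mode loopback --passphrase-file pw.txt \
        --cipher-algo AES256 -c "$f"
  done
  ls -l project1.7z.gpg
  rm -f pw.txt                                 # don't leave the passphrase around
else
  echo 'gpg not installed; skipping demo'
fi
```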
_webapps.11054
Which file types will not upload and open in Microsoft's SkyDrive?
Files not accepted by SkyDrive
onedrive
null
_cs.16664
I'm trying to generate a random but realistic network topology so I can test the performance of some routing algorithms. I came across Waxman's model described in Routing of Multipoint Connections, which seems pretty simple:Distribute $N$ nodes randomly across a plane (uniform in x and y).For each pair of nodes, generate an edge between them with the probability $ P = \beta \exp \frac{-d}{L\alpha}$, where $d$ is the euclidean distance between the nodes, $L$ is the maximum distance between two nodes, and $\alpha$ and $\beta$ are parameters in the range $(0, 1]$.I've implemented my current understanding of Waxman's algorithm as a simple web-based demo, which visualizes a generated topology from chosen parameters $\alpha$, $\beta$, and $N$.However, I want to be able to generate a connected network topology for a specific number of nodes. Since Waxman's algorithm generates edges probabilistically, I usually end up with disconnected nodes. How do I connect the rest of the nodes to the topology in a way consistent with Waxman's algorithm, i.e. simulates a real network topology?There are plenty of ways to finish the topology by connecting the disconnected nodes, but I don't know which one is the most compatible with the already-generated edges. Waxman's paper doesn't seem to mention how disconnected nodes are treated.
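The two generation steps described above can be sketched in plain Python (parameter defaults and names are illustrative; this does not yet address the connectivity question being asked):

```python
import math
import random

def waxman(n, alpha=0.4, beta=0.4, seed=0):
    """Step 1: scatter n nodes uniformly; step 2: add each edge with
    probability P = beta * exp(-d / (L * alpha))."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # L is the maximum distance between any two nodes.
    L = max(dist(a, b) for a in pts for b in pts)
    edges = [(i, j)
             for i in range(n) for j in range(i + 1, n)
             if rng.random() < beta * math.exp(-dist(pts[i], pts[j]) / (L * alpha))]
    return pts, edges

pts, edges = waxman(20)
```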
How to generate a connected random network topology for a specific number of nodes?
graphs;computer networks
null
_unix.321579
I have a variable which holds a number:

daysAgo=1

I would like to expand this variable in a 'get date' expression, like this:

$(date +%d -d '$daysAgo days ago')

What do I need to do so that the $daysAgo variable gets expanded? I tried this without success:

daysAgo=1
exp='${daysAgo} days ago'
$(date +%d -d $exp)
How to insert variable in get date expression?
bash;date;variable
Just use ", not '. Double quotes allow expansion of variables within the quotes; single quotes don't.

daysAgo=1
echo $(date +%d -d "$daysAgo day ago")
06

daysAgo=1
exp="$daysAgo days ago"
echo $(date +%d -d "$exp")
06
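The same expansion as a runnable one-liner (GNU date, which provides the -d option):

```shell
daysAgo=1
day=$(date +%d -d "$daysAgo days ago")   # double quotes let $daysAgo expand
echo "$day"
```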
_unix.222319
Conspy is a neat remote control program for the TTY virtual consoles in Linux. I am trying to compile the latest v1.10-1 version, but after installing all the supposedly needed packages, I still have a compilation error that stops the procedure:

luis@utilite-desktop:~/Temporal/conspy/conspy-1.10$ make clean
test -z conspy || rm -f conspy
test -z *~ || rm -f *~
rm -f *.o
luis@utilite-desktop:~/Temporal/conspy/conspy-1.10$ make
gcc -DPACKAGE_NAME=\"conspy.c\" -DPACKAGE_TARNAME=\"conspy-c\" -DPACKAGE_VERSION=\"1.10\" -DPACKAGE_STRING=\"conspy.c 1.10\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"conspy-c\" -DVERSION=\"1.10\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_FCNTL_H=1 -DHAVE_GETOPT_H=1 -DHAVE_STDARG_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_SYS_IOCTL_H=1 -DHAVE_SYS_TIME_H=1 -DHAVE_TERMIOS_H=1 -DHAVE_UNISTD_H=1 -DTIME_WITH_SYS_TIME=1 -DRETSIGTYPE=void -DHAVE_SELECT=1 -DHAVE_STRTOL=1 -I. -g -O2 -MT conspy.o -MD -MP -MF .deps/conspy.Tpo -c -o conspy.o conspy.c
conspy.c: In function 'process_command_line':
conspy.c:352:11: warning: ignoring return value of 'strtol', declared with attribute warn_unused_result [-Wunused-result]
mv -f .deps/conspy.Tpo .deps/conspy.Po
gcc -g -O2 -o conspy conspy.o
conspy.o: In function `cleanup':
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:542: undefined reference to `endwin'
conspy.o: In function `conspy':
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:624: undefined reference to `wmove'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:625: undefined reference to `wclrtoeol'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:658: undefined reference to `wmove'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:659: undefined reference to `waddchnstr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:660: undefined reference to `wchgat'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:680: undefined reference to `wmove'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:681: undefined reference to `waddchnstr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:682: undefined reference to `wchgat'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:685: undefined reference to `wmove'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:686: undefined reference to `wrefresh'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:615: undefined reference to `LINES'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:615: undefined reference to `LINES'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:699: undefined reference to `endwin'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:700: undefined reference to `wrefresh'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:552: undefined reference to `LINES'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:552: undefined reference to `stdscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:552: undefined reference to `COLS'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:552: undefined reference to `curscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:729: undefined reference to `wrefresh'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:617: undefined reference to `stdscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:617: undefined reference to `stdscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:618: undefined reference to `stdscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:618: undefined reference to `stdscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:617: undefined reference to `wmove'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:618: undefined reference to `wclrtobot'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:779: undefined reference to `stdscr'
conspy.o: In function `setup':
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:499: undefined reference to `initscr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:500: undefined reference to `nonl'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:515: undefined reference to `has_colors'
conspy.o: In function `main':
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:278: undefined reference to `tigetstr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:280: undefined reference to `tigetstr'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:280: undefined reference to `putp'
conspy.o: In function `setup':
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:517: undefined reference to `start_color'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:529: undefined reference to `init_pair'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:519: undefined reference to `acs_map'
/home/luis/Temporal/conspy/conspy-1.10/conspy.c:519: undefined reference to `COLOR_PAIRS'
collect2: ld returned 1 exit status
make: *** [conspy] Error 1

The compilation yields similar errors on:

Ubuntu 14.04 LTS on PC (portable computer from ASUS).
Ubuntu 12.04 LTS on Utilite from Compulab (an embedded device like RaspBerry).

Why is the building failing with that undefined reference error and how could it be solved?
Conspy: Undefined reference errors when trying to compile the latest version
compiling
For those arriving here, these are the needed packages for ConSpy:

# apt-get install libtool libncurses5-dev fakeroot sudo automake devscripts

The problem (or so I believe): as @SteelDriver pointed out, between each make attempt I was not doing the needed ./configure.

UPDATE 2015-10-16:

There is no need to do ./configure since v1.13 and later. In fact, there is no such script in the sources any more. It seems to be included in the compilation script.
_unix.242520
I have Linux Mint 17 and intended to upgrade to 17.2 using the upgrade wizard. The Upgrade to Linux Mint 17.2 Rafaela option appears in the edit menu of the Update Manager, but when I click it nothing happens.I assume there is some error occurring preventing the wizard from starting. Is there a way to see logs from the Update Manager? Has anyone else had this problem?
Linux mint 17.2 rafaela upgrade wizard not appearing
linux mint;upgrade
null
_softwareengineering.120193
In one of our projects, we've built a publisher-subscriber engine on Oracle Service Bus. The functionality: a series of events is published, and subscribers (JMS queues) receive each event whenever a new one is published. We are facing some technical issues now, performance-wise, and hence an architectural review is underway.

Now for my questions: architecturally, the ESB has to publish events into a DB, read from the DB which users wish to be notified, and then push the event onto their respective queues. There is a high amount of DB interaction, and the question is whether the ESB should be having such a high amount of interaction with the DB in the first place, or whether some alternate component should have been responsible for doing this. Alternately, is there any non-DB approach in which we can store the events and subscribers? Where else can this application data be held within the ESB context?
Use of Service Bus in a Pub-Sub Engine
architecture;service bus
null
_unix.263842
I am using Fedora and tried to install l7-filter on the linux-2.6.26 kernel. One of the steps is to run make menuconfig, but I am getting this error:

Makefile:434: *** mixed implicit and normal rules.  Stop.
Makefile:434: *** mixed implicit and normal rules. Stop
command line;fedora;make
null
_cseducators.791
I'm looking for examples to give my students when lecturing about stacks, for use-cases of the stack in programming and in life. So far I've been thinking of a pole of rings (where you can only insert or remove rings at the top).

My students are high school students I'm teaching next year for their matriculation exam. They have only some basic knowledge of procedural programming and the tip of the iceberg in object-oriented stuff.

Any ideas for more analogies or examples?
What are some good examples of using a stack data structure?
teaching analogy;data structure
I once had to implement a limited undo function (undo changes to the current field, or addition/deletion of records).The lists of undo deltas (one for field changes and one for records) were stored as stacks.
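A minimal sketch of that undo idea (names are illustrative): each edit pushes the previous value, and undo pops the most recent one, which is exactly the LIFO behavior a stack provides.

```python
history = []   # the undo stack of previous values
value = ""

def set_value(new):
    global value
    history.append(value)   # push the old value before overwriting it
    value = new

def undo():
    global value
    if history:
        value = history.pop()   # pop restores the most recent state

set_value("a")
set_value("ab")
undo()
print(value)  # -> a
```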
_unix.338423
Why does grep output lines that seemingly don't match the expression? As mentioned in my comment, this behaviour may be caused by a bug. I am aware different locales affect character order, but I thought the -o output below confirmed this is not a problem here; I was wrong. Adding LC_ALL=C gives the expected output. I had this question after I saw locales affected the output.

[aa@bb grep-test]$ cat input.txt
aa bb
CC cc
dd ee
[aa@bb grep-test]$ LC_ALL=C grep -o [A-Z] input.txt
C
C
[aa@bb grep-test]$ grep -o [A-Z] input.txt
C
C
[aa@bb grep-test]$ LC_ALL=C grep [A-Z] input.txt
CC cc
[aa@bb grep-test]$ grep [A-Z] input.txt
aa bb
CC cc
dd ee
[aa@bb grep-test]$

[aa@bb tmp]$ cat test
aa bb
CC cc
dd ee
[aa@bb tmp]$ grep [A-Z] test
aa bb
CC cc
dd ee
[aa@bb tmp]$ grep -o [A-Z] test
C
C
[aa@bb tmp]$ grep -E [A-Z] test
aa bb
CC cc
dd ee
[aa@bb tmp]$ grep -n [A-Z] test
1:aa bb
2:CC cc
3:dd ee
[aa@bb tmp]$ echo [A-Z]
[A-Z]
[aa@bb tmp]$ grep -V
GNU grep 2.6.3
...
[aa@bb tmp]$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
...
[aa@bb grep-test]$ command -v grep
/bin/grep
[aa@bb grep-test]$ rpm -q -f $(command -v grep)
grep-2.6.3-6.el6.x86_64
[aa@bb grep-test]$ echo grep [A-Z] input.txt | xxd
0000000: 6772 6570 205b 412d 5a5d 2069 6e70 7574  grep [A-Z] input
0000010: 2e74 7874 0a                             .txt.
[aa@bb grep-test]$ cmd='grep [A-Z] input.txt'; echo $cmd | xxd; eval $cmd
0000000: 6772 6570 205b 412d 5a5d 2069 6e70 7574  grep [A-Z] input
0000010: 2e74 7874 0a                             .txt.
aa bb
CC cc
dd ee
[aa@bb grep-test]$ xxd input.txt
0000000: 6161 2062 620a 4343 2063 630a 6464 2065  aa bb.CC cc.dd e
0000010: 650a 0a                                  e..
[aa@bb grep-test]$
Why does grep output lines that seemingly don't match the expression?
grep
This looks like your locale collation rules being very ... helpful.

Try it with

LC_ALL=C grep [A-Z] input.txt

to test that idea.

I have

export LANG=en_US.UTF-8
export LC_COLLATE=C
export LC_NUMERIC=C

in my shell startup to avoid this kind of trouble while still getting my unicode goodness.
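Reproducing the fix with the question's own data (the test file is recreated here):

```shell
printf 'aa bb\nCC cc\ndd ee\n' > input.txt
# In the C locale, [A-Z] covers only the uppercase letters, so only
# the line with capitals matches.
LC_ALL=C grep '[A-Z]' input.txt   # prints only: CC cc
```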
_softwareengineering.189852
I know that N-Tier is intended to separate layers onto different networks, but I would like to have the same code separation in CodeIgniter. I got this idea:

Model: for database CRUD -> data layer
REST API: for business logic -> business layer that calls CRUD methods from the models

Controller & View carry on as before, and the flow will be

View <> Controller <> API (with business classes) <> Model

I want to know what kind of drawbacks such an architecture can have.
How to achieve N-Tier type in Codeigniter MVC
php;mvc;rest;n tier;codeigniter
null
_unix.166018
Input file (column 1 contains a number that may repeat; column 2 may contain voice, email, tel, or voice mail):

123,voice
123, tel
324,voice mail
345,email
123,email

Output file, with headers, containing Y if the number has the corresponding value in column two, else N:

number,voice,voice mail,tel,email
123,Y,N,Y,N,Y
324,N,Y,N,N,N
345,N,N,N,N,Y
Need to use grep and sort for matching records and display Y and N
text processing;sort
null
_codereview.62801
I wrote some Lua bindings for FTGL's C API. This works well enough, but I ended up with lots of macros, one for each Lua function signature. For example, LUD_NUMBER_NUMBER_TO_NUMBER creates a Lua function that takes a lightuserdata argument followed by two numeric arguments and returns a number.

I like having most of the functions created with macros, because when reading the code, you can tell those functions are simply passing the same values through to the C API, and the functions that are actually written out have more interesting stuff going on.

I'm wondering if there's a way to reduce the redundancy in the macros themselves, though, or a better way to handle this. I thought about having one macro for each number of parameters, and passing in the parameter types (number, userdata, etc.), but you'd have to pass each type name once in lower case and once in upper case, making it difficult to read.

What can I do to clean this up?

#include <stdlib.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
#include <FTGL/ftgl.h>

#define ustring const unsigned char *

/* work around typo in FTGL */
#ifndef ftglGetLayoutAlignment
  #define ftglGetLayoutAlignment ftglGetLayoutAlignement
#endif

#define LUD_LUD_TO_NIL(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TLIGHTUSERDATA); \
    name(lua_touserdata(L, 1), lua_touserdata(L, 2)); \
    return 0; \
}

#define LUD_NUMBER_NUMBER_TO_NIL(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TNUMBER); \
    luaL_checktype(L, 3, LUA_TNUMBER); \
    name(lua_touserdata(L, 1), lua_tonumber(L, 2), lua_tonumber(L, 3)); \
    return 0; \
}

#define LUD_NUMBER_NUMBER_TO_NUMBER(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TNUMBER); \
    luaL_checktype(L, 3, LUA_TNUMBER); \
    lua_pushnumber(L, name(lua_touserdata(L, 1), lua_tonumber(L, 2), \
                           lua_tonumber(L, 3))); \
    return 1; \
}

#define LUD_NUMBER_TO_NIL(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TNUMBER); \
    name(lua_touserdata(L, 1), lua_tonumber(L, 2)); \
    return 0; \
}

#define LUD_NUMBER_TO_NUMBER(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TNUMBER); \
    lua_pushnumber(L, name(lua_touserdata(L, 1), lua_tonumber(L, 2))); \
    return 1; \
}

#define LUD_STRING_NUMBER_TO_NIL(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TSTRING); \
    luaL_checktype(L, 3, LUA_TNUMBER); \
    name(lua_touserdata(L, 1), lua_tostring(L, 2), lua_tonumber(L, 3)); \
    return 0; \
}

#define LUD_STRING_TO_NUMBER(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    luaL_checktype(L, 2, LUA_TSTRING); \
    lua_pushnumber(L, name(lua_touserdata(L, 1), lua_tostring(L, 2))); \
    return 1; \
}

#define LUD_TO_LUD(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    lua_pushlightuserdata(L, name(lua_touserdata(L, 1))); \
    return 1; \
}

#define LUD_TO_NIL(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    name(lua_touserdata(L, 1)); \
    return 0; \
}

#define LUD_TO_NUMBER(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA); \
    lua_pushnumber(L, name(lua_touserdata(L, 1))); \
    return 1; \
}

#define NIL_TO_LUD(name) \
int lf_ ## name (lua_State *L) { \
    lua_pushlightuserdata(L, name()); \
    return 1; \
}

#define STRING_TO_LUD(name) \
int lf_ ## name (lua_State *L) { \
    luaL_checktype(L, 1, LUA_TSTRING); \
    const char *a1 = lua_tostring(L, 1); \
    lua_pushlightuserdata(L, name(a1)); \
    return 1; \
}

#define PREFIXED_CONST(name, prefix) \
    lua_pushstring(L, #name); \
    lua_pushnumber(L, prefix ## _ ## name); \
    lua_rawset(L, -3);

#define FT_CONST(name)   PREFIXED_CONST(name, FT)
#define FTGL_CONST(name) PREFIXED_CONST(name, FTGL)

LUD_TO_NIL(ftglDestroyFont)
LUD_STRING_TO_NUMBER(ftglAttachFile)

/* int ftglAttachData(FTGLfont* font, const unsigned char * data, size_t size); */
int lf_ftglAttachData(lua_State *L)
{
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA);
    luaL_checktype(L, 2, LUA_TSTRING);
    size_t size;
    ustring data = (ustring)lua_tolstring(L, 2, &size);
    lua_pushnumber(L, ftglAttachData(lua_touserdata(L, 1), data, size));
    return 1;
}

LUD_NUMBER_TO_NUMBER(ftglSetFontCharMap)
LUD_TO_NUMBER(ftglGetFontCharMapCount)

/* FT_Encoding* ftglGetFontCharMapList(FTGLfont* font) */
int lf_ftglGetFontCharMapList(lua_State *L)
{
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA);
    FTGLfont *font;
    FT_Encoding *charMapList;
    font = lua_touserdata(L, 1);
    unsigned int charMapCount = ftglGetFontCharMapCount(font);
    charMapList = ftglGetFontCharMapList(font);
    lua_newtable(L);
    for (int i = 0; i < charMapCount; i++) {
        lua_pushnumber(L, charMapList[i]);
        lua_rawseti(L, -2, i + 1);
    }
    return 1;
}

LUD_NUMBER_NUMBER_TO_NUMBER(ftglSetFontFaceSize)
LUD_TO_NUMBER(ftglGetFontFaceSize)
LUD_NUMBER_TO_NIL(ftglSetFontDepth)
LUD_NUMBER_NUMBER_TO_NIL(ftglSetFontOutset)
LUD_NUMBER_TO_NIL(ftglSetFontDisplayList)
LUD_TO_NUMBER(ftglGetFontAscender)
LUD_TO_NUMBER(ftglGetFontDescender)
LUD_TO_NUMBER(ftglGetFontLineHeight)

/* void ftglGetFontBBox(FTGLfont* font, const char *string, int len, float bounds[6]); */
int lf_ftglGetFontBBox(lua_State *L)
{
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA);
    luaL_checktype(L, 2, LUA_TSTRING);
    size_t len;
    const char *a2 = lua_tolstring(L, 2, &len);
    float bounds[6];
    ftglGetFontBBox(lua_touserdata(L, 1), a2, len, bounds);
    lua_newtable(L);
    for (int i = 0; i < 6; i++) {
        lua_pushnumber(L, bounds[i]);
        lua_rawseti(L, -2, i + 1);
    }
    return 1;
}

LUD_STRING_TO_NUMBER(ftglGetFontAdvance)
LUD_STRING_NUMBER_TO_NIL(ftglRenderFont)
LUD_TO_NUMBER(ftglGetFontError)

STRING_TO_LUD(ftglCreateBitmapFont)
STRING_TO_LUD(ftglCreateBufferFont)
STRING_TO_LUD(ftglCreateExtrudeFont)
STRING_TO_LUD(ftglCreateOutlineFont)
STRING_TO_LUD(ftglCreatePixmapFont)
STRING_TO_LUD(ftglCreatePolygonFont)
STRING_TO_LUD(ftglCreateTextureFont)

LUD_TO_NIL(ftglDestroyLayout)

/* void ftglGetLayoutBBox(FTGLlayout *layout, const char* string, float bounds[6]); */
int lf_ftglGetLayoutBBox(lua_State *L)
{
    luaL_checktype(L, 1, LUA_TLIGHTUSERDATA);
    luaL_checktype(L, 2, LUA_TSTRING);
    FTGLlayout *a1 = lua_touserdata(L, 1);
    const char *a2 = lua_tostring(L, 2);
    float bounds[6];
    ftglGetLayoutBBox(a1, a2, bounds);
    lua_newtable(L);
    for (int i = 0; i < 6; i++) {
        lua_pushnumber(L, bounds[i]);
        lua_rawseti(L, -2, i + 1);
    }
    return 1;
}

LUD_STRING_NUMBER_TO_NIL(ftglRenderLayout)
LUD_TO_NUMBER(ftglGetLayoutError)

NIL_TO_LUD(ftglCreateSimpleLayout)
LUD_LUD_TO_NIL(ftglSetLayoutFont)
LUD_TO_LUD(ftglGetLayoutFont)
LUD_NUMBER_TO_NIL(ftglSetLayoutLineLength)
LUD_TO_NUMBER(ftglGetLayoutLineLength)
LUD_NUMBER_TO_NIL(ftglSetLayoutAlignment)
LUD_TO_NUMBER(ftglGetLayoutAlignment)
LUD_NUMBER_TO_NIL(ftglSetLayoutLineSpacing)
/* http://sourceforge.net/p/ftgl/bugs/35/
LUD_TO_NUMBER(ftglGetLayoutLineSpacing) */

int luaopen_luaftgl(lua_State *L)
{
    const luaL_Reg api[] = {
        /* FTFont functions (FTGL/FTFont.h) */
        { "destroyFont",           lf_ftglDestroyFont },
        { "attachFile",            lf_ftglAttachFile },
        { "attachData",            lf_ftglAttachData },
        { "setFontCharMap",        lf_ftglSetFontCharMap },
        { "getFontCharMapCount",   lf_ftglGetFontCharMapCount },
        { "getFontCharMapList",    lf_ftglGetFontCharMapList },
        { "setFontFaceSize",       lf_ftglSetFontFaceSize },
        { "getFontFaceSize",       lf_ftglGetFontFaceSize },
        { "setFontDepth",          lf_ftglSetFontDepth },
        { "setFontOutset",         lf_ftglSetFontOutset },
        { "setFontDisplayList",    lf_ftglSetFontDisplayList },
        { "getFontAscender",       lf_ftglGetFontAscender },
        { "getFontDescender",      lf_ftglGetFontDescender },
        { "getFontLineHeight",     lf_ftglGetFontLineHeight },
        { "getFontBoundingBox",    lf_ftglGetFontBBox },    /* added */
        { "getFontBBox",           lf_ftglGetFontBBox },    /* original */
        { "getFontAdvance",        lf_ftglGetFontAdvance },
        { "renderFont",            lf_ftglRenderFont },
        { "getFontError",          lf_ftglGetFontError },

        /* FTGL*Font functions (FTGL/FTGL*Font.h) */
        { "createBitmapFont",      lf_ftglCreateBitmapFont },
        { "createBufferFont",      lf_ftglCreateBufferFont },
        { "createExtrudeFont",     lf_ftglCreateExtrudeFont },
        { "createOutlineFont",     lf_ftglCreateOutlineFont },
        { "createPixmapFont",      lf_ftglCreatePixmapFont },
        { "createPolygonFont",     lf_ftglCreatePolygonFont },
        { "createTextureFont",     lf_ftglCreateTextureFont },

        /* FTLayout functions (FTGL/FTLayout.h) */
        { "destroyLayout",         lf_ftglDestroyLayout },
        { "getLayoutBoundingBox",  lf_ftglGetLayoutBBox },  /* added */
        { "getLayoutBBox",         lf_ftglGetLayoutBBox },  /* original */
        { "renderLayout",          lf_ftglRenderLayout },
        { "getLayoutError",        lf_ftglGetLayoutError },

        /* FTSimpleLayout functions (FTGL/FTSimpleLayout.h) */
        { "createSimpleLayout",    lf_ftglCreateSimpleLayout },
        { "setLayoutFont",         lf_ftglSetLayoutFont },
        { "getLayoutFont",         lf_ftglGetLayoutFont },
        { "setLayoutLineLength",   lf_ftglSetLayoutLineLength },
        { "getLayoutLineLength",   lf_ftglGetLayoutLineLength },
        { "setLayoutAlignment",    lf_ftglSetLayoutAlignment },
        { "getLayoutAlignment",    lf_ftglGetLayoutAlignment },  /* added */
        { "getLayoutAlignement",   lf_ftglGetLayoutAlignment },  /* original */
        { "setLayoutLineSpacing",  lf_ftglSetLayoutLineSpacing },
        /* http://sourceforge.net/p/ftgl/bugs/35/
        { "getLayoutLineSpacing",  lf_ftglGetLayoutLineSpacing }, */
        { NULL, NULL }
    };

#if LUA_VERSION_NUM == 501
    luaL_register (L, "luaftgl", api);
#else
    luaL_newlib (L, api);
#endif

    /* FT_Encoding (freetype.h) */
    FT_CONST(ENCODING_NONE)
    FT_CONST(ENCODING_MS_SYMBOL)
    FT_CONST(ENCODING_UNICODE)
    FT_CONST(ENCODING_SJIS)
    FT_CONST(ENCODING_GB2312)
    FT_CONST(ENCODING_BIG5)
    FT_CONST(ENCODING_WANSUNG)
    FT_CONST(ENCODING_JOHAB)
    FT_CONST(ENCODING_MS_SJIS)
    FT_CONST(ENCODING_MS_GB2312)
    FT_CONST(ENCODING_MS_BIG5)
FT_CONST(ENCODING_MS_WANSUNG) FT_CONST(ENCODING_MS_JOHAB) FT_CONST(ENCODING_ADOBE_STANDARD) FT_CONST(ENCODING_ADOBE_EXPERT) FT_CONST(ENCODING_ADOBE_CUSTOM) FT_CONST(ENCODING_ADOBE_LATIN_1) FT_CONST(ENCODING_OLD_LATIN_2) FT_CONST(ENCODING_APPLE_ROMAN) /* FTGL constants */ FTGL_CONST(RENDER_FRONT) FTGL_CONST(RENDER_BACK) FTGL_CONST(RENDER_SIDE) FTGL_CONST(RENDER_ALL) FTGL_CONST(ALIGN_LEFT) FTGL_CONST(ALIGN_CENTER) FTGL_CONST(ALIGN_RIGHT) FTGL_CONST(ALIGN_JUSTIFY) return 1;}
Lua bindings for FTGL (FreeType font rendering in OpenGL)
c;lua;opengl;macros;freetype
null
_cstheory.32568
According to the answers in this posting, it is possible that $\mathsf{VP} = \mathsf{VNP}$ and $\mathsf{P} \neq \mathsf{NP}$ hold simultaneously. $\mathsf{VP} = \mathsf{VNP}$ implies $\mathsf{P/poly} = \mathsf{PH/poly}$ (assuming the Generalized Riemann Hypothesis).

This means that $\mathsf{VP} = \mathsf{VNP}$ would amplify the power of polynomial-size circuits even if $\mathsf{P} \neq \mathsf{NP}$. The best known obstruction for $\mathsf{P} \neq \mathsf{BPP}$ comes from a circuit-related issue for problems in $\mathsf{E}$. However, if circuits have more power, which is the scenario posed by $\mathsf{VP}=\mathsf{VNP}$, then maybe the obstruction for $\mathsf{P} \neq \mathsf{BPP}$ is no longer legitimate.

How does $\mathsf{VP} = \mathsf{VNP}$ amplify the power of randomness?

If $\mathsf{VP} = \mathsf{VNP}$ were true, would $\mathsf{P}\neq \mathsf{BPP}=\mathsf{NP}$ be the most likely scenario? Are the other possibilities, such as $$(1)\mbox{ }\mathsf{P}=\mathsf{BPP}\neq \mathsf{NP}\quad \quad(2)\mbox{ }\mathsf{P}=\mathsf{BPP}=\mathsf{NP}\quad\quad(3)\mbox{ }\mathsf{P}\neq\mathsf{BPP}\neq\mathsf{NP}$$ less likely? Are there any reasons to believe or not to believe so?
Consequences of VP = VNP on randomness
circuit complexity;counting complexity;derandomization;p vs np;hierarchy theorems
null
_webapps.13727
I have a Gmail tab open in the browser (Firefox 4.0) all the time with chat enabled. However, when I move to a different tab for a while and then move back to the Gmail tab, my chat status has changed to idle (orange). Is there a way to prevent it from going idle when the tab is not active?

Update: This happens in Chrome too.
How can I stop the Gmail web based chat going idle when I switch browser tabs?
gmail;google talk
null
_unix.115481
I am getting the current date and checking the length of the date string; if it is non-zero, I perform some action.

now=$(date)
echo $now
if [[ -n $now ]]; then
echo not empty
fi

This prints the following to the console:

Mon Feb 17 01:51:38 CST 2014
Unrecognized Type: Mon
not empty

The line "if [[ -n $now ]]; then" is causing the shell to throw the "Unrecognized Type:" warning. Is there something wrong with the -n check?
Unrecognized Type: for length of date string
bash;shell;osx
null
_softwareengineering.63433
The Einstellung Effect refers to a person's predisposition to solve a given problem in a specific manner even though better or more appropriate methods of solving the problem exist. As a programmer with a decent amount of experience, how can one combat this tendency to always approach problem solving along the tried and true paths of past experience?

To give two very concrete examples: I have been building web applications for a long time, long enough to predate wide use of JavaScript frameworks (e.g. jQuery) and better web application frameworks (e.g. ASP.NET MVC). If I have client work where I am under a time crunch, or pressing issues from the problem domain or business rules, I tend to just use what I know to achieve a solution. This involves very ugly things like document.getElementById or using ASP.NET with template-bound controls (DataList/Repeater) rather than figuring out how to rearchitect things with an ASP.NET MVC approach.

One technique I've used in the past is to keep personal projects that exist simply for exploring these new technologies, but this is difficult to sustain. What other approaches could be recommended?
Combating the Einstellung Effect
learning
This is a great question. And I think it isn't just senior programmers that run into this — addressing it early can be a great way for a learner to accelerate their skill development.

There are two sides to this issue - one that is bad and one that is actually good.

Bad - Picking the wrong solution

Here's an example — as an inexperienced developer, you may have only really solved two problems before, problems A and B. At this point, you know there are problems you don't know, but given the lens of your own experience, a lot of what you see looks like it might be A or B.

Along comes a new problem. To you, this new problem looks like problem A, so you solve it the way you usually solve A. Something doesn't feel right, and it takes longer, and as you work you end up realizing this is a new problem, C. It's a variation of A you didn't know existed.

So what do you do to not make this mistake again? Two things:

1. Figure out what was different about this new problem. Figure out what approaches may have worked differently and why.
2. Catalog this problem away and move on to solving more new problems.

This should help you naturally solve this problem. By the time you have 10 years of experience, you are familiar with problems A through Z and your repertoire of solutions is extensive.

Good - Efficiency

In the real world, with deadlines and limited resources, using what you know isn't always bad:

At the onset of the problem-solving process, you compare the new problem to all problems you know. You'll attempt to recognize the signs and decide which problem set this looks like.

If a 100% match can't be made, an experienced developer will weigh the risk of spending more time in discovery against the risks of a possibly flawed execution. If the risk of wasted time is too high, then you just go ahead with what you know.

That isn't a bad thing - it's using risk analysis to choose efficiency over 100% accuracy. It's done every day and we'd all be tied up in things that aren't getting us anywhere if we didn't do it.

So, to answer your question:

As a programmer with a decent amount of experience, how can one combat this tendency to always approach problem solving from tried and true paths from past experience?

1. Keep looking for and cataloging new problems.
2. Get better at selecting the right solution for the problem; instead of just knowing which solution, know why it's right.
3. Practice and hone your decision-making skills. Sometimes efficiency is the right choice, and getting better at recognizing those times will lead to measurable real-world advantages.
_unix.180399
So I have a line of code like this:

result=`find . -type f -size -1000c -print0 | xargs -0 ls -Sh | head`
for i in $result; do
    item=`wc -c $i`
    echo $item
done

This will print out all the files in the current folder that are at most 1000 bytes, in a format like:

size_of_the_file ./name_of_the_file

but I want to get rid of the ./ prefix, so I try to use cut. I want to do something like:

for i in $result; do
    item=`wc -c $i`
    item1=`cut -f 1 $item`    # this gives me the size
    item2=`cut -c 7- $item`   # this gives me all the characters after ./
    echo $item1, $item2       # now print it
done

but I'm getting an error like:

cut: 639: No such file or directory

Can anyone please give me a hint on this? I appreciate it.
bash script, list all the files over a specific size
bash;shell script
null
_softwareengineering.315507
I haven't really found a decent/future-proof way to version methods in my WebAPI. This is what I typically do now, but it can get confusing and hard to trace once it grows (I'll end up [Obsolete]-ing the older ones as time goes on). Has anyone come up with a more elegant or manageable option for versioning WebAPI in C#/.NET?

public class CoreController : ApiController
{
    [HttpGet]
    [Route("~/services/core/test/v{_version}")]
    public IHttpActionResult returnTest(int _version)
    {
        try
        {
            switch (_version)
            {
                case 2:
                    return Ok(returnTest_v2());
                case 1:
                default:
                    return Ok(returnTest_v1());
            }
        }
        catch
        {
            return NotFound();
        }
    }

    public string returnTest_v1()
    {
        return "SAMPLE_V1";
    }

    public string returnTest_v2()
    {
        return "SAMPLE_V2";
    }
}
REST API Versioning in C# WebApi
c#
Actually, since you're already using custom routes this is a pretty simple fix. It has to do with the route structure, which right now isn't perfect, but we can fix it.

From the sample code, I gather that your API follows this pattern:

{category}/{resource}/{id}/v{ersion}

Which is fine; the problem is it's creating those branches in your code, which I can't imagine are very fun to maintain or add to, though I'll have to admit it'd be convenient to have the previous version's code right there.

Anyway, consider moving around a few things to do this:

v{ersion}/{category}/{resource}/{id}

and separate the methods out into each route on its own. So the example method's signature would become:

[HttpGet]
[Route("~/v1/services/core/test/")]
public IHttpActionResult returnTestv1() { // Remember each method has to be uniquely named in ASP.NET

[HttpGet]
[Route("~/v2/services/core/test/")]
public IHttpActionResult returnTestv2() {

You'll note I also didn't use the version as a variable, to 'bake in' each route's identity. So, while this doubles (triples, quadruples...) the number of methods for each endpoint, there are a few upsides:

1) Compatibility - Let's say you make this a public API. If I write something for v1, I can keep using v1 until you take it down (my company, for example, maintains v4 and v5, and retired v3 and earlier. Blizzard still supports their original v1 of the Warcraft API and is now on v3 or something). This also means you can clearly modify the functionality of certain endpoints between versions and not confuse existing client programs. I know in an SIS API I programmed against once, the function of GET /student/{id} changed drastically between v1 and v2.

2) Maintenance - With this new structure, there's no confusion about where a bug is occurring. Better, if you decide to only change the output or some logic on the way in, you can still call the old method and keep things behaving consistently.
_softwareengineering.347295
Say, for example, I have some unused code that I want to use in the future but that does not work with the rest of the program, or say that I come across some interesting code online. Is there any good program in which I can store different pieces of code?
Where to archive code
coding style;source code
If it's for a program which is already under source control, just keep these changes in a branch of that repo.

For odd bits of code, you could use almost anything, but some good options would be a private GitHub or Bitbucket repo, a Gist, or some private source-control system you maintain on your own hardware. Non-source-control options might include dumping things into an S3 bucket or a similar cloud storage system, especially since your usage is unlikely to exceed the free pricing tier.
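For the branch-of-a-repo idea, the mechanics are just ordinary git. A throwaway sketch (the file name, branch name, and identity here are invented for illustration):

```shell
# A local repo used as a personal snippet archive,
# with a topic branch set aside for snippets of one kind.
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'SELECT 1;\n' > "$repo/handy-query.sql"
git -C "$repo" add handy-query.sql
git -C "$repo" -c user.name=me -c user.email=me@example.com \
    commit -qm 'archive: handy sql snippet'
git -C "$repo" branch snippets/sql          # park future sql snippets here
branches=$(git -C "$repo" branch --list)    # default branch + snippets/sql
echo "$branches"
rm -rf "$repo"
```

The same layout pushes to a private GitHub or Bitbucket remote unchanged, which is what makes the hosted options interchangeable with the local one.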
_unix.347184
After adding a new ssh key to .ssh/authorized_keys I can no longer ssh to the machine without entering a password. What is even funnier is that the .ssh directory is suddenly inaccessible when I'm logged in via ssh (no direct console access):

pi@prodpi ~ $ ls -la
drw------- 2 pi pi 4096 Mar 13  2015 .ssh
pi@prodpi ~ $ cd .ssh/
-bash: cd: .ssh/: Permission denied
pi@prodpi ~ $ ls .ssh/
ls: cannot access .ssh/authorized_keys: Permission denied
ls: cannot access .ssh/known_hosts: Permission denied
authorized_keys known_hosts
pi@prodpi ~ $ sudo ls .ssh/
authorized_keys known_hosts

The user is pi. What, if not directory permissions, could prevent me from accessing the folder as its owner and potentially break ssh login?
Cannot cd to .ssh
ssh;cd command;permission denied
To enter a directory you need the execute (search) permission on it; the drw------- in your listing shows that bit is missing.

This should do it:

chmod u+x .ssh/
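A throwaway demonstration on a scratch directory (not a real ~/.ssh; uses GNU stat): mode 600 reproduces the drw------- from the question, and u+x adds exactly the missing bit:

```shell
# Reproduce the broken drw------- mode on a scratch directory, then fix it.
tmp=$(mktemp -d)
mkdir "$tmp/dotssh"
chmod 600 "$tmp/dotssh"            # drw-------: readable, but not searchable
before=$(stat -c '%A' "$tmp/dotssh")
chmod u+x "$tmp/dotssh"            # add the search (execute) bit back
after=$(stat -c '%A' "$tmp/dotssh")
echo "$before -> $after"           # drw------- -> drwx------
rm -rf "$tmp"
```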
_unix.236291
So here's what I want to do: the user inputs a USERNAME. Based on this username, I need to get the list of processes started by this user. I am planning to do this by getting the UID of this user and listing all the processes with this UID. I only found the UID in the /proc/$PID/status file. I am unclear about how to proceed with this.
How to get UID and PID
linux;process;proc
To get the UID from the username, use id -u:

$ id -u root
0
$ id -u lightdm
112
$ id -u nobody
65534

But you are re-inventing the wheel. pgrep already handles this just fine:

$ pgrep -u www-data
1909
1910
1911
1912
$ id -u www-data
33
$ pgrep -u 33
1909
1910
1911
1912

You can also use plain ps:

$ ps -U www-data -o uid,pid
  UID   PID
   33  1909
   33  1910
   33  1911
   33  1912
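Tying it together, here is a sketch of the username-to-UID-to-/proc pipeline the question describes, with pgrep alongside for comparison. It uses the current user so it can run anywhere; substitute any username for $user:

```shell
# List processes owned by a user: once by scanning /proc/*/status, once via pgrep.
user=$(id -un)                              # stand-in for the user-supplied name
uid=$(id -u "$user")
proc_pids=$(for status in /proc/[0-9]*/status; do
    # the second field of the "Uid:" line is the process's real UID
    puid=$(awk '/^Uid:/ {print $2; exit}' "$status" 2>/dev/null || true)
    if [ "$puid" = "$uid" ]; then basename "$(dirname "$status")"; fi
done)
echo "uid=$uid"
echo "/proc scan: $(printf '%s\n' "$proc_pids" | wc -l) processes"
echo "pgrep -u:   $(pgrep -u "$uid" | wc -l) processes"
```

One caveat worth knowing: pgrep -u matches the effective UID, while the Uid: line's second field is the real UID; for ordinary processes the two agree.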
_cs.28047
In the Bible, a census is taken of the 12 tribes of Israel:

Simeon: 59,300
Levi: 22,000
Judah: 74,600
Issachar: 54,400
Joseph: 72,700
Benjamin: 35,400
Reuben: 46,500
Gad: 45,650
Asher: 41,500
Zebulun: 57,400
Dan: 62,700
Naphtali: 53,400

In the book of Deuteronomy 27:12-13, the tribes are partitioned into two groups, the first group consisting of Simeon, Levi, Judah, Issachar, Joseph, and Benjamin, and the second group consisting of Reuben, Gad, Asher, Zebulun, Dan, and Naphtali.

Someone mentioned to me that this particular partition is optimal in the sense that it minimizes the absolute value of the total number of people in the first group minus the total number of people in the second group. The partition problem is NP-hard: http://en.wikipedia.org/wiki/Partition_problem

Is this claim correct?
Does the Bible solve an NP-hard problem?
np complete
The partition provided in that passage is not optimal:

(Simeon + Levi + Judah + Issachar + Joseph + Benjamin) - (Reuben + Gad + Asher + Zebulun + Dan + Naphtali) = 318400 - 307150 = 11250

(Asher + Benjamin + Joseph + Reuben + Simeon + Zebulun) - (Dan + Gad + Issachar + Judah + Levi + Naphtali) = 312800 - 312750 = 50
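Since there are only 12 tribes, the claim can be settled by brute force over all 2^12 subsets. A minimal sketch in plain bash, using the census figures from the question:

```shell
# Exhaustive check: minimum |group1 - group2| over all 2^12 = 4096 splits.
pop=(59300 22000 74600 54400 72700 35400 46500 45650 41500 57400 62700 53400)
total=0
for p in "${pop[@]}"; do total=$((total + p)); done
best=$total
for ((mask = 1; mask < 4095; mask++)); do   # skip the two empty-group splits
    sum=0
    for ((i = 0; i < 12; i++)); do
        if (( mask & (1 << i) )); then sum=$((sum + pop[i])); fi
    done
    diff=$((total - 2 * sum))
    diff=${diff#-}                          # absolute value
    if (( diff < best )); then best=$diff; fi
done
echo "total=$total best=$best"              # total=625550 best=50
```

The best achievable difference is 50, matching the partition in the answer above; 50 is also a lower bound, since every census figure is a multiple of 50 and the grand total 625550 is an odd multiple of 50, so no split can be exact.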
_unix.367443
I've been searching for a day and have found nothing solid. All the instructions I've been reading on how to install yum assume I have access to the folder /etc/ so that I can set up the file /etc/yum.conf. However, I'm on a shared server and only have access to my home (~) folder. Is there still a way to install yum?

I cloned it:

git clone git://yum.baseurl.org/yum.git

and I tried:

make install DESTDIR=/home4/jfolpf/mydir

with no success. Every time I try to run it, I get:

CRITICAL:yum.cli:Config Error: Error accessing file for config file:///etc/yum.conf

Thank you in advance.
How can I build and install yum such that it runs exclusively out of a user-owned directory?
yum;make
null
_webmaster.39164
How can I remove the old IP address from my connection script?

Right now my site shows:

Error establishing a database connection.

I am using WordPress on mixedsoft.com (remote SQL issues).
How to remove an old ip address from my connection script?
ip address;sql
null
_unix.369569
Recently a tar.gz archive broke my script. Steps to reproduce:

# this is a Python package distributed through PyPI
wget https://pypi.python.org/packages/b3/e8/0a829f58ff6068f94edf74877f2e093aae945482c96ade683ef3cafdfcad/EasyExtend-3.0.2-py2.5.tar.gz
# tar exit status is 0 (i.e. not a broken archive)
tar -zxvf EasyExtend-3.0.2-py2.5.tar.gz
ls -l EasyExtend-3.0.2-py2.5

Result:

ls: cannot access 'EasyExtend-3.0.2-py2.5/scripts': Permission denied
ls: cannot access 'EasyExtend-3.0.2-py2.5/setup.py': Permission denied
ls: cannot access 'EasyExtend-3.0.2-py2.5/LICENSE.txt': Permission denied
ls: cannot access 'EasyExtend-3.0.2-py2.5/PKG-INFO': Permission denied
ls: cannot access 'EasyExtend-3.0.2-py2.5/EasyExtend': Permission denied
ls: cannot access 'EasyExtend-3.0.2-py2.5/README.txt': Permission denied
total 0
d????????? ? ? ? ? ? EasyExtend
-????????? ? ? ? ? ? LICENSE.txt
-????????? ? ? ? ? ? PKG-INFO
-????????? ? ? ? ? ? README.txt
d????????? ? ? ? ? ? scripts
-????????? ? ? ? ? ? setup.py

Though everything was done under a non-superuser account, the umask was not applied to the newly extracted files.

Question: is this a bug, a feature, or an invalid archive?

Question 2: is there an elegant way to force default permissions on such files?

UPD: my umask is 0002. sudo ls -l shows the right permissions:

sudo ls -l EasyExtend-3.0.2-py2.5
total 28
drw-rw-r-- 7 username username 4096 Sep 19  2009 EasyExtend
-rw-rw-r-- 1 username username 1559 May 16  2006 LICENSE.txt
-rw-rw-r-- 1 username username  342 Sep 19  2009 PKG-INFO
-rw-rw-r-- 1 username username  585 Aug 13  2008 README.txt
drw-rw-r-- 2 username username 4096 Sep 19  2009 scripts
-rw-rw-r-- 1 username username 5296 Aug 15  2008 setup.py
tar extracts with invalid permissions - is it an intended behavior?
permissions;tar;umask
These are perfectly valid permissions; they just don't include letting you access the files inside :). In terms of tar, surely it's a feature? But the archive sounds messed up.

Re umask, I would assume the explanation here is correct: umask is purely a subtractive thing (or a bitmask, if you're a programmer). It doesn't make sense to say umask was not applied on the basis that there are missing permission bits.

To grant execute permission on all directories, you can conveniently use

chmod -R a+X EasyExtend-3.0.2-py2.5
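A scratch-directory sketch of both points (uses GNU stat), recreating the archive's drw-rw-r-- directories: umask can only remove bits from a requested mode, never add the missing x, while a+X adds execute to directories (and to files that already carry some execute bit) in one pass:

```shell
# Recreate the broken modes from the archive, then repair them with a+X.
tmp=$(mktemp -d)
mkdir -p "$tmp/pkg/scripts"
touch "$tmp/pkg/setup.py"
chmod 644 "$tmp/pkg/setup.py"
chmod 664 "$tmp/pkg/scripts" "$tmp/pkg"   # the archive's drw-rw-r-- directories
umask 0002                                # umask only masks bits out; it cannot add x
chmod -R a+X "$tmp/pkg"                   # directories gain x; the plain file does not
d1=$(stat -c '%a' "$tmp/pkg")
d2=$(stat -c '%a' "$tmp/pkg/scripts")
f1=$(stat -c '%a' "$tmp/pkg/setup.py")
echo "pkg=$d1 scripts=$d2 setup.py=$f1"   # pkg=775 scripts=775 setup.py=644
rm -rf "$tmp"
```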