Tag Archives: open source

Now How Do I Use This Old Scanner Again? NAPS2!

I have an old beast for a printer/scanner. It is nine years old in human years, which is like one hundred in technology years, but it gets the job done.

One problem I’ve run into repeatedly over the years is scanning software. At some point in the distant past the software that came with the scanner disappeared. Yes, one can still download the drivers from the manufacturer’s site – but I’m talking about the software that makes the scanning process easier and more robust. Usually this software comes from a third-party company, so the manufacturer’s site doesn’t offer it as a download. So what is one to do?

Not Another PDF Scanner 2 screenshot

You’d think there must be tons of free software options out there for such a simple and fundamental application – but you’d be surprised (at least I was). Over the years I’ve used numerous applications to scan – some commercial trials (FileCenter being my preferred one, but way too expensive for an occasional scan) and lots of crappy free programs.

Well, no more. There is now an excellent, free, and open source option available called NAPS2 (Not Another PDF Scanner 2).

What makes it so great? I’m glad you asked!

  • File Format Support – It can create PDF, TIFF, JPEG, PNG, and other file types (I find the PDF support especially useful for multi-page documents).
  • Automatic Document Feeder / Duplex Support – ADF means it can feed a stack of pages without user intervention, and duplex means it can handle double-sided documents, also without user intervention.
  • Simple Scan Management – Rotate pages, straighten images, crop, etc.
  • Optical Character Recognition (OCR) – Supports identifying the text in scanned documents.
  • Powerful – Need to automate your scanning using a command-line interface? How about distributing it via Group Policy? No problem (see the sketch below).
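
For instance, a nightly scan could be scripted by calling NAPS2’s console component from Python. This is only a rough sketch: the install path, profile name, and flags below are assumptions for illustration – check the command-line documentation for your NAPS2 version before relying on them.

```python
import subprocess
from datetime import date

# Assumption: NAPS2 installs a console executable (commonly "NAPS2.Console.exe")
# that accepts an output file and a saved scan profile. The path and flag names
# below are illustrative; verify them against your NAPS2 version's CLI docs.
NAPS2_CONSOLE = r"C:\Program Files\NAPS2\NAPS2.Console.exe"  # hypothetical install path

output_file = f"scan-{date.today().isoformat()}.pdf"

result = subprocess.run(
    [NAPS2_CONSOLE, "-o", output_file, "-p", "DefaultProfile"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("Scan failed:", result.stderr)
else:
    print("Saved", output_file)
```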

Why the Healthcare.gov fiasco SHOULD teach us to Open Source Government Application Development.

We Spent How Much?!

According to The Daily Beast, the United States Government has spent $118 million to build Healthcare.gov and another $56 million fixing it…and given that the site isn’t expected to be fully patched for some time yet, I wouldn’t be surprised if the total cost of “fixing” it ends up exceeding the cost of building the system in the first place.

Image courtesy of OpenClipart.org and Iwan Gabovitch

I’m not going to take a position on the Affordable Care Act (ACA) – I try to avoid speaking publicly on controversial issues…but I would like to suggest a lesson we can learn from the ACA that I don’t think will be (very) controversial across party lines: that the government should, as a standard rule, use open source in the development of its applications.

Now, I’m not particularly interested in arguing that every government project should be open source – I’ll be happy if 95-99% of them are. I understand that some people, rightly or wrongly, believe that using open source in sensitive areas could create security risks. I’ll let Kevin Clough and perhaps Richard Stallman[1] argue that point.[2] But for the vast majority of projects (Healthcare.gov, for example) I can see no reason why development should not be open source, and I believe there would be significant advantages to such a course of action.

Let’s take a look at the specific ways in which open source development could have reduced or eliminated the issues involved in the Healthcare.gov launch:

Transparency

The government (not just one department, but its entirety – e.g., the White House and Congress) and the public could much more readily have seen that issues were arising, deadlines were slipping, and so on, and made the necessary adjustments.

It is a constant problem within organizations that individuals at higher levels make decisions without the knowledge base needed to make them well. This can result in unrealistic timelines. And even when timelines are realistic, if unexpected issues arise and there is slippage, there is a temptation to “gloss over” the setbacks and “hope” the timeline can still be met.

This oftentimes results in extreme pressure on those actually working on the application to produce more, faster – which, especially in the case of programming, is unwise. The more you pressure programmers, the more likely they are to make mistakes and take shortcuts; and the more hours you demand of them, the less productive they become and, again, the more the bug count grows.

Bug Fixes

Open source software is oftentimes very stable and secure because of the number of eyes looking over the code. Further, amateurs can make small contributions, freeing the core programmers to focus on system architecture and bigger issues instead of stomping out bugs and making aesthetic improvements.

It would make sense for the government to take an approach similar to Microsoft, Google, and Yahoo! on this front – each offers cash rewards for the discovery of issues. This is a relatively inexpensive way to get folks to pour in their energies – and individuals receive what is, for them, significant compensation (hundreds to thousands of dollars, depending on the issue discovered).

Load Testing

The failure to properly load test the Healthcare.gov site is shocking. An open source project still needs robust methods of load testing performed by the core team – but it also benefits from other individuals and organizations implementing the application and discovering bottlenecks.

An open source, distributed team could also have easily simulated the significant load the site experienced at launch – exposing the load issues early enough for remediation.
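
To give a sense of how simple even a crude load check can be, here is a minimal sketch using only Python’s standard library – the URL and request counts are placeholders, not anything specific to Healthcare.gov, and a real load test would need far more rigor.

```python
# Minimal concurrent load-check sketch (standard library only).
# The target URL and request counts are placeholders for illustration.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://example.com/"   # placeholder endpoint
CONCURRENCY = 50
TOTAL_REQUESTS = 500

def fetch(_):
    start = time.time()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            ok = (resp.status == 200)
    except Exception:
        ok = False
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(fetch, range(TOTAL_REQUESTS)))

successes = sum(1 for ok, _ in results if ok)
avg_latency = sum(t for _, t in results) / len(results)
print(f"{successes}/{TOTAL_REQUESTS} succeeded, average latency {avg_latency:.2f}s")
```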

Code Reuse

When a project is open source its code can be reused by others for all sorts of purposes. The code behind this project would certainly have applications in other government projects as well as the private sector. Reuse of code can significantly shorten development timeframes, and even when someone uses a portion of the code for an entirely different project in a different industry, they will oftentimes contribute their version (with enhancements and bug fixes) back to the original project – resulting in better, more flexible, more secure, and more robust code.

Cost

I really am just spitballing here – but I have a hard time believing that the development of an open source system to perform the Healthcare.gov functions would have cost anywhere near the costs expended thus far upon this closed source system. I’d guess that $10 million could have completed the project in a more robust and timely manner via open source.

Lesson Learned?

Please, let us take a lesson from this fiasco. We want more affordable healthcare – we can start by not wasting millions developing an application as a closed system which lacks robustness and stability.

I know some areas of the government are already working with open source (and that is great) – but this needs to be a greater emphasis. Perhaps (I don’t know) there should even be legislation making open source the required standard for new applications, with any application proposed as closed source requiring review by a panel to determine whether there are actual, substantial reasons for developing it as a closed system.

[Apparently I’m not the only one to think OSS could have made a huge difference. See this article by Christina Farr over at VentureBeat. Not directly related, but still interesting is Dylan Tweney’s article “Healthcare.gov costs show that feds have literally no idea how to build a big web site” also on VentureBeat. Another article comes from NBC News staff writer Gil Aegerter and can be found here.]

[11/4: Good article from Matt Asay entitled, “Sorry, Open Source Isn’t the Panacea for Healthcare.gov” on ReadWrite.]

  1. [1]Stallman would argue for free software rather than open source, but I leave that semantic argument, however important it may be, aside for the time being to focus on an area in which a relatively minor change in procedure (moving to open source development) could make a significant difference in cost and efficiency.
  2. [2]There are some excellent arguments for how and why open source technology can be more secure than closed source technology. Specifically, the additional security in closed source systems usually isn’t because the systems are actually more secure but is a function of “security by obscurity” – in other words, security holes exist, but no one knows about them (including those who wrote the software). But I digress…

Tech News Summary for May 8th, 2013.

Interview with Open-Mesh.

Introduction

I’ve been using Open-Mesh for several years now, first at Calvary Community Church and more recently at a consulting client’s location. Recently I decided to reach out to Open-Mesh and ask if they’d provide me with an interview and included a number of questions. Michael Burmeister-Brown, President of Open-Mesh, responded to my questions and I have included his answers along with any commentary I might have below.

I’ve also included additional information gathered from Open-Mesh representatives in recent conversations as I’ve been installing a new mesh network for a client, along with what I could dig up about Open-Mesh’s corporate background…Enjoy!

Interview

Thanks to OpenClipart.org and pgbrandolin for the image.

Dave: What happened to the MR500 line of products?

Michael: The MR500 has been discontinued. It was never designed as a successor to the OM2P series, but as a second, dual-band line. Its successor will come out this summer (2013). The successor will include:

  • Dual band 2.4/5 GHz.
  • Clients and mesh will run on both bands (the MR500 meshed on 5 GHz and served clients on 2.4 GHz).
  • Much higher power / receive sensitivity, providing greatly improved range and speed. [Dave: From personal experience, distance was a real issue with MR500 units – a limitation inherent in the 5 GHz spectrum, which has a more difficult time penetrating walls and other obstacles.]
  • Each band will support 450 Mbps, providing an aggregate throughput of 900 Mbps.
  • The addition of 802.3af PoE support, meaning the new units will work with standard PoE switches. [Dave: Current units in both the MR500 and OM2P lines require injectors, and the warranty is voided if units are connected directly to a standard PoE switch.]
  • Layer 7 (application) bandwidth control and monitoring. This will allow administrators to control which websites / web applications users can run and how much bandwidth they are allowed.
  • Active Directory / RADIUS support, allowing integrated authentication against company servers.
  • The PoE version will have a single gigabit Ethernet port, while another variation without PoE will have five ports.
  • The price point will be $299. While this is more expensive than the MR500, it is still almost $1,000 less than the equivalent Meraki model (MR24).

Dave: What about the future of other products?

Michael: We will be introducing a 5 GHz-only OM5P model, identical to the OM2P-HS but operating on 5 GHz. This will allow customers to use all the OM2P-series housing options and build out a 5 GHz or hybrid network consisting of 5 GHz and 2.4 GHz models. Most computers, tablets, and phones from the last couple of years support 5 GHz, so this will be an increasingly viable option. It will also be considerably lower in cost than dual-band units while providing more flexibility in installation. The OM5P will have an MSRP of $99.

Dave: The site is pretty simple and there doesn’t seem to be a lot of advertising out about Open-Mesh. Will this change?

Michael: To date we have not done extensive sales or marketing outreach, but I think you can see this is beginning to change by the website and especially the resources page. This summer/fall will see significant increases in this area as new people come onboard.

Dave: How many employees do you have at Open-Mesh?

Michael: I am not sure of the exact count – we are a geographically diverse company with two separate teams in Germany and others in Italy, Canada, China, and of course, the United States.

Dave: For organizations interested in Open-Mesh, how do they know your product will work and that you’ll be around in the future?

Michael: Our sales have doubled each year for the last three years and we have just under 40,000 networks managed on Cloudtrax. Feel free to reach out in a couple months and I’ll be able to share more information on new offerings – especially regarding Cloudtrax.

Open-Mesh Corporate Profile

Open-Mesh is a low profile organization. Unlike many sites that have detailed information about their corporate officers posted on the site, Open-Mesh has none. Go over to CrunchBase and you’ll find a bare-bones company profile. There is no company page on LinkedIn and searching for Open-Mesh employees surfaces only two.

One could take this as a sign that the company is small and unstable, but when it comes to technical companies this is oftentimes the sign that employees are pretty hard-core geeks who spend more time coding and building than they do marketing themselves. It seems to be the latter in the case of Open-Mesh.

Luckily, finding information on Open-Mesh President Michael Burmeister-Brown, who provided the above interview, is a little easier than finding information on employees generally – and Burmeister-Brown’s background is nothing to laugh at.

Bloomberg’s BusinessWeek tells us that Michael founded Central Point Software in 1981 where he served as President, Chief Executive Officer (CEO), and Chief Technology Officer (CTO) until 1991. Central Point would be acquired by Symantec in 1994 for $60 million.

In 1992 Michael founded another company – Second Nature Software  – and began serving as its president. This company had an environmental focus and committed all its profits to The Nature Conservancy – over $2.5 million. It appears to have closed its doors as of 2012.

Michael founded another company, NetControls.com, in the mid-1990s, and in 1997 this company was acquired by Yahoo!. Michael continued at Yahoo! for five years, working on Yahoo’s News Ticker and Yahoo Messenger products.

He has also served as a Director of WebTrends since October 1996. I am unsure whether this position is ongoing – Bloomberg doesn’t clarify.[1]

Michael became a co-founder of NetEquality seeking to ensure that internet access was available for everyone – especially low-income communities. Originally, NetEquality was associated with Meraki, but when Meraki boosted their prices and abandoned the low-cost market, Michael decided to step in and found Open-Mesh.

Want a face to put to that name? Check out Oregon Live’s article here and scroll down the page halfway.

Other News

According to conversations with Open-Mesh representatives I’ve had over the last several weeks, here are a few other tidbits I’ve gathered:

  • There is a significant firmware upgrade on the way for Open-Mesh devices this Spring [unfortunately, I’ve forgotten the details of what is included…but it was pretty exciting.].
  • Another upgrade will occur in the Fall/Winter of 2013 which will include one of Meru’s best features – automatic load balancing across available APs.

Send Me Your News

If you have additional info. or updates on Open-Mesh or CloudTrax, I’d love to hear them and I’ll try to add them to the current article as appropriate.

Support Open Mesh

I’m impressed by what Open-Mesh is seeking to accomplish; it seems like a company with an honorable and worthy mission. I’d encourage you to join me in supporting them.

  1. [1]This information on Burmeister-Brown consists of information gathered from Bloomberg as well as from NetEquality, Wikipedia, and Second Nature Software’s site.

A Social Search Engine Proposal.

Overview:

Nutch robots
Image via Wikipedia

IMHO, the current state of search is depressing. This is not a new realization for me. It was seven or eight years ago that I first imagined a social search engine that would not rely solely on algorithms to determine the relative importance of search results but would consider both machine and end-user feedback. This was in the early days of Nutch, and I began researching the possibility of using Nutch as the underlying core engine for such an endeavor, rounded up a small amount of investment capital, and so on. Unfortunately, this was also at the peak of my struggle with Obsessive Compulsive Disorder (OCD), and my efforts eventually fell through.

Over the years I have watched as promising engine after promising engine has come along and, in its turn, failed to take the lead or even maintain its momentum. Years have passed, and at each step of the way I have said, “It must just be around the corner…This is ages in technology time.” Even Google came out with SearchWiki; while not a perfect implementation, it was a huge step in the right direction. For the last year or two I’ve been using Zakta, and I’ve spent time on almost every other social search engine currently (or previously) available – yet I find that in the long run they have all failed me.

So here I am, so many years later, longing for just such an engine. I’ve written on this blog about the topic before, but I will write again. In this post I will specifically propose the formation of an endeavor to create a social search engine, and I hope it will foster some interest in the community. I am neither ready nor able to undertake such an endeavor myself – but I am interested in being part of one.

Open Source: Ensuring Continuity

It is worth noting at this juncture that I’d intend for this project to be open source. Too many times I have lost the social search data I have accumulated because a specific engine has folded. My hope would be that the resultant project would be open source with commercial implementations and would provide a significant amount of data portability between engines, in case one engine should fold. We’ll talk more about the open source and portability aspects of the project later in this proposal.
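
To make the portability idea a bit more concrete, here is a hypothetical sketch of what one user’s exported search data might look like – the field names and structure are purely illustrative assumptions, not a proposed standard.

```python
import json

# Hypothetical export of one user's social-search customizations.
# Every field name here is illustrative only; a real interchange format
# would have to be standardized across participating engines.
user_export = {
    "format_version": "0.1",
    "user": "example-user",
    "queries": [
        {
            "query": "open source search engines",
            "votes": [
                {"url": "https://nutch.apache.org/", "vote": 1},
                {"url": "https://example.com/spam-page", "vote": -1},
            ],
            "added": ["https://lucene.apache.org/solr/"],
            "removed": ["https://example.com/link-farm"],
            "annotations": {
                "https://nutch.apache.org/": "Good starting point for crawling."
            },
        }
    ],
}

print(json.dumps(user_export, indent=2))
```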

What is Social Search?

Before we jump into a discussion about how to build a social search engine, it is necessary first to define what is meant by social search. Unfortunately, the term social search is used for several concepts that are quite different from one another.

There are the real-time search engines, which focus on aggregating information from various social media networks – and sometimes prioritizing links based on their popularity within a network. Examples include Topsy (also defunct), the no-longer-real-time OneRiot, and the now-defunct Scoopler.

There are the engines focused on finding humans – e.g., allowing one to gather information about a person. Wink eventually became this sort of engine; Spokeo is another example. They are essentially white pages on steroids.

Finally, there is what I mean by social search – and I would use another term, but there is no other term I am aware of that is so widely used for this type of engine (and I want to ensure the widest possible audience). It is sometimes called a “human-powered search engine.”[1] Google and Wikia may have come closest by terming it a “wiki” (SearchWiki and Wikia Search), but it seems to me there is a need for an entirely new term that better and more precisely defines the idea…perhaps one result of this proposal and its aftermath will be just such a term.[2]

Core Parameters

In this section I will delineate what I believe are the core required features for a social search engine. An engine that included these features would, I believe, qualify as a 1.0 release. There is certainly room for numerous improvements, but this would define a baseline by which to measure the proposal’s progress. I am not infallible, and I am sure there are aspects of the baseline which should be edited, removed, or replaced – I am open to suggestions.

  • Web Crawler – The engine must include a robust web crawler which can index the web, not just a subset of sites (e.g. Nutch).
  • Interpretive Ability – The engine must be able to interpret a wide variety of file formats, minimizing the invisible web (e.g. Tika).
  • Engine – The engine must be able to quickly query the aggregated web index and return results in an efficient manner (e.g. Nutch).
  • Search Interface – The engine must include a powerful search interface for quickly and accurately returning relevant results (e.g. Solr).
  • Scalability – The engine must be scalable to sustain worldwide utilization (e.g. Hadoop).
  • Algorithms – In addition to the standard automated algorithms for page relevance, the system must integrate human-based feedback (a rough sketch of how this might work follows this list), including:
    • Positive and negative votes on a per page basis.
    • The ability to add and remove pages from query results.
    • Influence of votes based on a calculation of user trustworthiness (merit).
    • Promotion of results by administrative users.
  • Custom Results – The results must be customized for the user. While the aggregate influence of users affects the results, the individual user is also able to customize results. One should see a search page which reflects the results one has chosen and not the results one has removed.
    • Ability to annotate individual entries.
  • Portability – The engine should define a standard format for user data which can be exported and imported between engines. This should include customized query results, annotations, votes, removed and added pages, etc. This will be available to the user for export/import at any time. While additional data may be maintained by individual engines, the basic customizations should be portable.

I’m sure I’m missing some essentials – please share with me whatever essentials I have forgotten that come to your mind.
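
As a rough illustration of how votes weighted by user merit might be folded into a page’s ranking, here is a minimal sketch – the 70/30 blend, the merit values, and the promotion bonus are arbitrary assumptions for illustration, not a tuned formula.

```python
# Sketch of merit-weighted voting layered on top of an automated relevance score.
# The 70/30 blend, merit values, and promotion bonus are illustrative assumptions.

def community_score(votes):
    """votes: list of (vote, merit) pairs; vote is +1 or -1, merit is 0.0-1.0."""
    if not votes:
        return 0.0
    return sum(vote * merit for vote, merit in votes) / len(votes)

def final_score(algorithmic_score, votes, removed=False, promoted=False):
    """Combine machine relevance with human feedback for a single result."""
    if removed:            # explicitly removed pages drop out of the results
        return float("-inf")
    score = 0.7 * algorithmic_score + 0.3 * community_score(votes)
    if promoted:           # administrative promotion bumps the result
        score += 1.0
    return score

# Example: two results for the same query.
votes_a = [(+1, 0.9), (+1, 0.4), (-1, 0.2)]   # mostly upvotes from trusted users
votes_b = [(-1, 0.8), (-1, 0.7)]              # downvotes from trusted users
print(final_score(0.82, votes_a))             # ~0.68
print(final_score(0.85, votes_b))             # ~0.37 – outranked despite a higher base score
```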

Starting from Zero?

It is not necessary for this project to begin from nothing; significant portions of the work toward an open source search engine have already been undertaken – largely by Apache’s Nutch project. The available code should be utilized and, with customization, could integrate social search features. This would allow some of the most significant aspects of the project to be offloaded to already existing projects.

Additionally, it might be hoped that companies and individuals who have previously created endeavors in this direction would open source their code. For example, Wikia Search was built on Nutch, and the code – including the distributed crawler (GRUB) and the UI – was released into the open source world.[3]

What We Need

Now the question becomes, “What do we need?” and more importantly, “Whom do we need?”

First off, we could use donated hosting. Perhaps one of the larger cloud-based hosting companies would consider offering us space for a period of time? I’m thinking here of someone like Rackspace, Amazon Web Services, or GoGrid.

Secondly, we’d need developers. I’m not a Java developer…though I’ve downloaded the code and am preparing to jump in. I also don’t have a ton of time – so depending on me to get the development done…well, it could take a while.

Thirdly, we’d need content curators…and I think this is key (and also one of the areas I love the most). We’d need people to edit the content and make the results awesome. These individuals would be “power users” whose influence on results would be more significant than that of a new user. With time individuals could increase their reputation, but this would seed us with a trusted core of individuals[4] who would ensure that the results returned are high quality right from the get-go for new users.[5]

Finally, we’d need some designers. I’m all for simplicity in search – but goodness knows most of us developers have very limited design abilities and an aesthetic touch here and there would be a huge boon to the endeavor.

Next Steps

At this juncture it’s all about gathering interest: finding projects that have already begun the process, looking for old, hidden open source code that may be of use, and so on. Leave a comment if you’d like to be part of the discussion.

Appendixes

Current Open Source Search Engines

  • DataparkSearch – GNU GPL, diverged from mnGoSearch in 2003, coded in C and CGI.
  • Egothor – Open source, written in Java, currently undergoing a complete from-scratch rewrite for version 3.
  • Grubng – Open source, distributed crawler.
  • BeeSeek – Open source, P2P, focuses on user anonymity.
  • Yioop! (SeekQuarry) – GNU GPLv3, documentation is very informative.
  • Heritrix – Open source, by Archive.org for their web archives.
  • Seeks Project – AGPLv3, P2P, fairly impressive project which attempts to take social search into consideration.
  • OpenWebSpider – Open source, written in .NET, appears to be abandoned.
  • Ex-Crawler – Open source, Java, impressive, last update released in 2010.
  • Jumper Search – Open source, social search, website appears to be down, currently linking to SF.
  • Open Search Server – Open source.
  1. [1]Or a human search engine, which sadly becomes entangled with engines meant for finding humans, such as those referenced previously.
  2. [2]A few other terms which might be appropriate are collaborative search engine, though this would have to be prefaced with “active” to distinguish it from passive user feedback aggregation (e.g. how long a user stayed at a site); curation search engine (giving the idea of content curation, but this is sometimes thought of in terms of archival); or crowd-sourced search engine (though this centers too much on democracy, whereas such engines would probably benefit from a meritocracy).
  3. [3]Unfortunately, I have been unable to find a copy of the Wikia UI code.
  4. [4]Taking a page from early Ask Jeeves history.
  5. [5]Obviously not necessarily in the long tail, but in the general topics.

10 Beautiful Owls (Creative Commons – Attribution License)

Here is a collection of beautiful images of owls licensed under the Creative Commons – Attribution License.
Owl
(by Gabor Kovacs)

owl 1
(by w.marsh)

Owl
(by Joe [CmdrGravy])

owl in my palm tree
(by Brisbane Falling)

Owl
(by Rhys’s Piece Is)

another owl
(by Brisbane Falling)

Burrowing Owl
(by Squeezyboy)

Snowy Owl
(by Neil McIntosh [Harlequeen])

Screech owl pair
(by Norasilk)

Eagle Owl
(by webheathcloseup)

KeePass – Free Software for Keeping Your Critical Information Safe.

KeePass is a free and open source password manager that easily outstrips the commercial alternatives I have encountered. I’ve been using KeePass for several years now and can’t complain one bit.

What For?

The KeePass Password Safe icon. (Photo credit: Wikipedia)

You shouldn’t use the same password for every site; in fact, ideally you shouldn’t use the same password on any two sites or services you access. Your login to your bank should not be the same as the one for your email or for logging in to your computer – and so on. If they are the same, a single compromised site could give hackers access to a large number of your accounts. When you begin to carry this best practice out in real life you find yourself with a tremendous number of passwords – and KeePass helps you securely store and manage that account information.

Don’t be deceived – KeePass can keep a lot more than just username/password combinations. If you wanted to you could use KeePass to keep a private journal…

Features:

  • Utilizes advanced security methods to protect your data – for example, AES-256 encryption and SHA-256 hashing.
  • Import/Export Features to/from many formats.
  • Allows for the creation of groups for organizing passwords.
  • Integrates with web browsers, etc. to automatically input information.
  • Robust search that allows you to quickly find records based on any word in the record.
  • Has a plugin architecture that allows for extensibility.
  • Runs on a wide variety of Operating Systems.
  • Shows you how strong your passwords are as you type them.
  • Can generate random passwords for you (see the sketch below).
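
As a conceptual illustration of that last point (this is not KeePass’s own generator, just a minimal sketch using Python’s standard library):

```python
# Conceptual sketch of strong random password generation (not KeePass's implementation).
import secrets
import string

def generate_password(length=20):
    """Return a cryptographically random password of the given length."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. a 20-character mix of letters, digits, and symbols
```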

Conclusion:

I suppose I could go on…but enough said. It’s a great little application – there is no reason not to use it. Go get it now. There is no excuse not to be keeping track of your accounts and no excuse for this information to be unencrypted.

7-Zip – A Compression and Decompression Application.

Today the need for compression and decompression applications at the consumer level is not nearly as widespread as it once was. This is because we have increased our ability to store information – moving from floppy disks to DVDs and flash drives – and moved from dial-up internet to high-speed connections. In the past we tried to squish files down to the smallest possible size so they would fit on smaller media and transfer faster. Now we don’t worry about that nearly as much.

7-zip (Photo credit: PiPiWa)

Still, there are a large number of files that come in compressed formats. It is still a convenient way to send a whole bunch of files at once or to protect files with an encryption key. There are several commercial products available for this purpose including the venerable WinZip. That said, when possible I seek to find freeware or open source alternatives to commercial software packages – a habit that comes from growing up without (much) cash.

My personal favorite is 7-Zip. It’s free and open source. My only complaint is that the user interface is not nearly as intuitive or friendly as it could be. Still, if you are willing to take the time to learn the application, it is extremely powerful and can handle a wide variety of compression formats – way beyond just your normal zip/unzip.
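
For readers who like to script this sort of thing, here is a minimal sketch of bundling several files into one compressed archive using Python’s standard zipfile module – it illustrates the general idea of compressing a batch of files rather than 7-Zip’s own .7z format, and the file names are placeholders.

```python
# Bundle several files into a single compressed archive (standard zip format,
# not 7-Zip's native .7z). The file names below are placeholders.
import zipfile

files_to_send = ["report.pdf", "photo.jpg", "notes.txt"]

with zipfile.ZipFile("bundle.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for path in files_to_send:
        archive.write(path)

print("Created bundle.zip")
```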

My Platform as a Service (PaaS) List.

Some people are going to be up in arms over this list – because it isn’t purely a PaaS (platform as a service) list. I’m sure some notable entries are missing and some less notable entries are present. The order is random. I’ve just been evaluating PaaS solutions and figured I’d post most of what I’ve found thus far. I had a hard time finding any good lists – so perhaps this will ease someone else’s research. I’d love to hear what PaaS solutions I am missing!

  • WaveMaker – Build rich internet applications (RIA) using a WYSIWYG interface. Community edition is open source. Creates Java applications. Wikipedia Article.
  • Visual Web GUI – Build RIAs using a visual development interface. There is a free/open source express edition, with regular pricing beginning slightly under $350 for a license. Creates .NET applications. Can deploy to Windows Azure.
  • SalesForce – The Force platform is the defacto standard PaaS. Significant free offering included with up to 100 users, etc. Also, free licenses for non-profits with majority price discount on additional licenses.
  • nuBuilder – An open source project that allows for rapid development of web database applications. Wikipedia Article.
  • BungeeConnect – Uses an Eclipse-based IDE.
  • Web Fuser (Inuvia Technologies) – IDE and hosting. Hosting starts at $20/mo.
  • WinDev – Free lite IDE for rapid development of Java/.NET applications. Wikipedia Article.
  • Wolf Frameworks – Has a free starter plan (2 users, 100 MB storage, unlimited apps/entries). Wikipedia Article.
  • LongJump – Pricing starts at $30/user/mo.
  • WorkXpress – No pricing, thirty-day free trial. Does offer the ability to host with them, a third party, or your own. Claims to require no programming.
  • SpringBase – Fairly impressive free account for those looking to create a small database application. Appears they no longer offer a free trial. Pricing starts at $99/year.
  • TrackVia – Online database platform. Pricing is expensive ($249/mo.; starts at $99/mo.).
  • DBstract – Offers free accounts and low-cost premium accounts ($20/mo.) for creating database applications/hosting.
  • Caspio – Starts at $40/mo. Claims to require no programming. Wikipedia Article.
  • Zoho – Free account for up to two users, $5/ea./mo. additional users.
  • Hydro4GE – Still in closed beta.
  • HyperBase – Part of HyperOffice. (thanks: Jean Churchill).
  • MyTaskHelper – UI is pretty basic, but it is free.

Bible.org – For studying and living Christianity.

Stained glass at St John the Baptist's Anglica...
Image via Wikipedia

For those who are Christians or who are interested in understanding Christianity, there are few sites on the internet more valuable for learning and growing than the Biblical Studies Foundation (bible.org). While the resources available for understanding and practicing Christianity are generally extensive, the freedom with which the BSF makes its resources available is practically unparalleled. I have used the BSF site for years, continue to utilize it regularly, and rave about its magnificent capabilities.

Let’s take a look at a few of the BSF’s many features:

  • New English Translation (NET) – A brand new translation of the Old and New Testaments from the original manuscripts. The NET is readable yet precise, but what really makes the translation stand apart is the roughly 70,000 footnotes throughout the text. These footnotes are not commentary on the text but rather explain the translators’ decisions, especially on controversial verses. They offer deep insight into the original texts and are an amazing aid to the Bible student or translator.
  • Book Commentaries – Contemporary commentaries written by sincere Bible students and scholars are freely available on the BSF website. While there is great value in the two thousand years of commentary we have on Scripture, these commentaries offer an additional perspective – including the latest manuscript and archaeological evidence, contemporary illustrations and applications, and so on – while maintaining fidelity to the Scriptures.
  • The Theology Program (TTP) – An extensive theological training program meant for churches to use in training lay individuals in theology. The course is in-depth, practical, and understandable. It’s meant to help those who want to push on in their theological understanding but cannot afford the expense or time commitments of a college education at this juncture in their lives.

These are only a few of so many wonderful things you will find at the BSF. I insist, you must visit!