Author: Franz Malten Buemann

  • Healthcare: Shifting Customer Expectations Require A Focus On Customer Experience

    Covid-19 and the shift of healthcare experts to remote work gave healthcare CX systems an epic stress test: a huge influx of questions from consumers needing immediate and critical healthcare information. The results showed that today’s healthcare CX systems, generally composed of human agents in contact centers, government agencies, and websites, are based on a pre-Covid-19 paradigm that is not prepared for a post-pandemic reality. Healthcare organizations planning a path forward will continue to face significant challenges such as managing rising care costs, adopting new models of healthcare delivery driven by consolidation, a shrinking medical provider workforce, and more. Those organizations that manage and deliver unparalleled customer experiences have a strategic advantage in meeting those challenges and winning over customers in this rapidly evolving industry. Full article: https://www.forbes.com/sites/forbesbusinesscouncil/2021/06/01/healthcare-shifting-customer-expectations-require-a-focus-on-customer-experience/?sh=6182d4dc1913
    submitted by /u/vesuvitas

  • UserZoom launches the QXscore to holistically measure customer experience

    The CXM team took interest in the news of UserZoom launching QXscore, a tool meant to measure the customer experience of digital services and analyse behavioural data. What sparked our interest is the apparent ability of QXscore to gather user experience data and efficiently communicate the results to diverse stakeholders. That would allow brands to…
    The post UserZoom launches the QXscore to holistically measure customer experience appeared first on Customer Experience Magazine.

  • Robots.txt: The Deceptively Important File All Websites Need

    The robots.txt file helps major search engines understand where they’re allowed to go on your website.
    But, while the major search engines do support the robots.txt file, they may not all adhere to the rules the same way.
    Below, let’s break down what a robots.txt file is, and how you can use it.

    What is a robots.txt file?
    Every day, there are visits to your website from bots — also known as robots or spiders. Search engines like Google, Yahoo, and Bing send these bots to your site so your content can be crawled and indexed and appear in search results.
    Bots are a good thing, but there are some cases where you don’t want the bot running around your website crawling and indexing everything. That’s where the robots.txt file comes in.
    By adding certain directives to a robots.txt file, you’re directing the bots to crawl only the pages you want crawled.
    However, it’s important to understand that not every bot will adhere to the rules you write in your robots.txt file. Google, for instance, won’t listen to any directives that you place in the file about crawling frequency.
    Do you need a robots.txt file?
    No, a robots.txt file is not required for a website.
    If a bot visits your website and there is no robots.txt file, it will simply crawl your website and index pages as it normally would.
    A robots.txt file is only needed if you want more control over what gets crawled.
    Some benefits to having one include:

    Help manage server overloads
    Prevent crawl waste from bots visiting pages you do not want crawled
    Keep certain folders or subdomains private

    Can a robots.txt file prevent indexing of content?
    No, you cannot stop content from being indexed and shown in search results with a robots.txt file.
    Not all robots follow the instructions in the same way, so some may still index content you asked them not to crawl or index.
    In addition, if the content you are trying to keep out of the search results has external links pointing to it, search engines may still index it.
    The only way to ensure your content is not indexed is to add a noindex meta tag to the page. The tag looks like this and goes in the HTML of your page:
    <meta name="robots" content="noindex">
    It’s important to note that if you want search engines not to index a page, you must still allow the page to be crawled in robots.txt so that crawlers can see the noindex tag.
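    To sanity-check this in practice, you can verify that a page both carries the noindex tag and remains crawlable. Below is a minimal sketch using only Python’s standard library; the URL and user-agent are placeholders rather than references to any real page.

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen
    from urllib.robotparser import RobotFileParser

    class NoindexFinder(HTMLParser):
        """Flags pages containing <meta name="robots" content="...noindex...">."""
        noindex = False
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if (tag == "meta"
                    and (attrs.get("name") or "").lower() == "robots"
                    and "noindex" in (attrs.get("content") or "").lower()):
                self.noindex = True

    def check_page(url, user_agent="Googlebot"):
        root = "{0.scheme}://{0.netloc}".format(urlparse(url))
        robots = RobotFileParser(urljoin(root, "/robots.txt"))
        robots.read()                                  # fetch the live robots.txt
        crawlable = robots.can_fetch(user_agent, url)  # may this bot crawl the page?
        finder = NoindexFinder()
        finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        # noindex only takes effect if the crawler is allowed to fetch the page and see the tag.
        print(f"crawlable: {crawlable}, noindex tag present: {finder.noindex}")

    # check_page("https://www.example.com/private-page")  # hypothetical URL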
    Where is the robots.txt file located?
    The robots.txt file will always sit at the root domain of a website. As an example, our own file can be found at https://www.hubspot.com/robots.txt.
    For most websites, you should be able to access and edit the actual file via FTP or through the File Manager in your host’s cPanel.
    In some CMS platforms you can find the file right in your administrative area. HubSpot, for instance, makes it easy to customize your robots.txt file from your account.
    If you are on WordPress, the robots.txt file can be accessed in the public_html folder of your website.

    WordPress does include a robots.txt file by default with a new installation that will include the following:
    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-includes/
    The above is telling all bots to crawl all parts of the website except anything under the /wp-admin/ or /wp-includes/ directories.
    But you may want to create a more robust file. Let’s show you how, below.
    Uses for a Robots.txt File
    There could be many reasons you want to customize your robots.txt file — from controlling crawl budget, to blocking sections of a website from being crawled and indexed. Let’s explore a few reasons for using a robots.txt file now.
    1. Block All Crawlers
    Blocking all crawlers from accessing your site is not something you would want to do on an active website, but is a great option for a development website. When you block the crawlers it will help prevent your pages from being shown on search engines, which is good if your pages aren’t ready for viewing yet.
    2. Disallow Certain Pages From Being Crawled
    One of the most common and useful ways to use your robots.txt file is to limit search engine bot access to parts of your website. This can help maximize your crawl budget and prevent unwanted pages from winding up in the search results.

    It is important to note that just because you have told a bot to not crawl a page, that doesn’t mean it will not get indexed. If you don’t want a page to show up in the search results, you need to add a noindex meta tag to the page.

    Sample Robots.txt File Directives
    The robots.txt file is made up of blocks of directives. Each block begins with a user-agent line, and the rules for that user-agent are placed below it.
    When a specific search engine lands on your website, it will look for the user-agent that applies to it and read the block of rules addressed to it.
    There are several directives you can use in your file. Let’s break those down, now.
    1. User-Agent
    The user-agent directive lets you target specific bots or spiders with your rules. For instance, if you only want to target Bing or Google, this is the directive you’d use.
    While there are hundreds of user-agents, below are examples of some of the most common user-agent options.
    User-agent: Googlebot
    User-agent: Googlebot-Image
    User-agent: Googlebot-Mobile
    User-agent: Googlebot-News
    User-agent: Bingbot
    User-agent: Baiduspider
    User-agent: msnbot
    User-agent: slurp     (Yahoo)
    User-agent: yandex
    It’s important to note that while most major crawlers match user-agent names case-insensitively, the URL paths in your rules are case-sensitive, so be sure to enter everything exactly.
    Wildcard User-agent
    The wildcard user-agent is noted with an (*) asterisk and lets you easily apply a directive to all user-agents that exist. So if you want a specific rule to apply to every bot, you can use this user-agent.
    User-agent: *
    A crawler will only follow the rules in the block that most closely applies to it; if there is a block for its specific user-agent, it will ignore the more general wildcard block.
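    As a quick illustration, the sketch below uses Python’s built-in urllib.robotparser to show that a bot with its own block ignores the wildcard block. The rules and bot names are made up for the example, and note that the standard-library parser only understands plain path prefixes, not wildcards.

    from urllib.robotparser import RobotFileParser

    rules = """
    User-agent: *
    Disallow: /portfolio

    User-agent: Googlebot
    Disallow: /portfolio/private
    Allow: /portfolio
    """.splitlines()

    parser = RobotFileParser()
    parser.parse(rules)

    # Googlebot matches its own block, so /portfolio pages are allowed for it...
    print(parser.can_fetch("Googlebot", "/portfolio/item"))      # True
    # ...while other bots fall back to the wildcard block and are blocked.
    print(parser.can_fetch("SomeOtherBot", "/portfolio/item"))   # False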
    2. Disallow
    The disallow directive tells search engines to not crawl or access certain pages or directories on a website.
    Below are several examples of how you might use the disallow directive.
    Block Access to a Specific Folder
    In this example we are telling all bots to not crawl anything in the /portfolio directory on our website.
    User-agent: *
    Disallow: /portfolio
    If we only want Bing to not crawl that directory, we would add it like this, instead:
    User-agent: Bingbot
    Disallow: /portfolio
    Block PDF or Other File Types
    If you don’t want your PDF or other file types crawled, then the below directive should help. We are telling all bots that we do not want any PDF files crawled. The $ at the end is telling the search engine that it is the end of the URL.
    So if you have a PDF file at mywebsite.com/site/myimportantinfo.pdf, the search engines won’t crawl it.
    User-agent: *
    Disallow: *.pdf$
    For PowerPoint files, you could use:
    User-agent: *
    Disallow: *.ppt$
    A better option might be to create a folder for your PDF or other files, disallow crawlers from crawling it, and apply a noindex directive to those files as well.
    Block Access to the Whole Website
    Particularly useful if you have a development website or test folders, this directive tells all bots not to crawl your site at all. It’s important to remember to remove this when you take your site live, or you will have indexation issues.
    User-agent: *
    Disallow: /
    The * (asterisk) above is what we call a "wildcard" expression: it means the rules that follow apply to all user-agents, and the Disallow: / rule blocks every path on the site.
    3. Allow
    The allow directive lets you specify pages or directories that you do want bots to access and crawl. It can act as an override to the disallow directive, seen above.
    In the example below we are telling Googlebot that we do not want the portfolio directory crawled, but we do want one specific portfolio item to be accessed and crawled:
    User-agent: Googlebot
    Disallow: /portfolio
    Allow: /portfolio/crawlableportfolio
    4. Sitemap
    Including the location of your sitemap in the file makes it easier for search engine crawlers to find and crawl your sitemap.
    If you submit your sitemaps directly to each search engine’s webmaster tools, then it is not necessary to add them to your robots.txt file.
    sitemap: https://yourwebsite.com/sitemap.xml
    5. Crawl Delay
    Crawl delay can tell a bot to slow down when crawling your website so your server does not become overwhelmed. The directive example below is asking Yandex to wait 10 seconds after each crawl action it takes on the website.
    User-agent: yandex  
    Crawl-delay: 10
    This is a directive you should be careful with. On a very large website it can greatly reduce the number of URLs crawled each day, which would be counterproductive. It can be useful on smaller websites, however, where the bots are visiting a bit too often.
    Note: Crawl-delay is not supported by Google or Baidu. If you want to ask their crawlers to slow their crawling of your website, you will need to do it through their tools.
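    For your own crawlers, or any bot built on Python’s standard library, the Crawl-delay value can be read and honored as in the sketch below; the bot name and rules are illustrative only.

    import time
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    parser.parse("""
    User-agent: *
    Crawl-delay: 10
    """.splitlines())

    delay = parser.crawl_delay("MyResearchBot") or 0   # 10 seconds in this example
    for path in ["/page-1", "/page-2", "/page-3"]:
        # ... fetch the page here ...
        time.sleep(delay)   # wait between requests, as the robots.txt asks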
    What are regular expressions and wildcards?
    Pattern matching is a more advanced way of controlling how a bot crawls your website using special characters.
    Two expressions are commonly supported by both Bing and Google. These can be especially useful on ecommerce websites.
    Asterisk: * is treated as a wildcard and can represent any sequence of characters
    Dollar sign: $ is used to designate the end of a URL
    A good example of using the * wildcard is the scenario where you want to prevent search engines from crawling pages that have a question mark in the URL. The code below tells all bots not to crawl any URLs that contain a question mark.
    User-agent: *
    Disallow: /*?
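    Python’s standard robots.txt parser does not understand these wildcards, so a rough way to test such patterns locally is to translate them into regular expressions, as in this sketch (the helper function and sample URLs are made up for illustration):

    import re

    def robots_pattern_to_regex(pattern):
        # '*' matches any run of characters; a trailing '$' anchors the rule
        # to the end of the URL. Rules without '$' match as a prefix.
        anchored = pattern.endswith("$")
        body = re.escape(pattern.rstrip("$")).replace(r"\*", ".*")
        return re.compile("^" + body + ("$" if anchored else ""))

    rule = robots_pattern_to_regex("/*?")
    print(bool(rule.match("/shop/page?sort=price")))   # True: the URL contains a '?'
    print(bool(rule.match("/shop/page")))              # False: no '?' in the URL

    pdf_rule = robots_pattern_to_regex("/*.pdf$")
    print(bool(pdf_rule.match("/site/myimportantinfo.pdf")))   # True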
    How to Create or Edit a Robots.txt File
    If you do not have an existing robots.txt file on your server, you can easily add one with the steps below.

    Open your preferred plain-text editor to start a new document. Common editors that may already be on your computer include Notepad (Windows) and TextEdit (Mac); avoid word processors such as Microsoft Word, which can add formatting that breaks the file.
    Add the directives you would like to include to the document.
    Save the file with the name "robots.txt"
    Test your file as shown in the next section
    Upload your .txt file to your server via FTP or through your host’s cPanel. How you upload it will depend on the type of website you have.

    In WordPress you can use plugins like Yoast SEO, All in One SEO, or Rank Math to generate and edit your file.
    You can also use a robots.txt generator tool to help you prepare one which might help minimize errors.
    How to Test a Robots.txt File
    Before you go live with the robots.txt file code you created, you will want to run it through a tester to ensure it’s valid. This will help prevent issues with incorrect directives that may have been added.
    The robots.txt testing tool is only available on the old version of Google Search Console. If your website is not connected to Google Search Console, you will need to do that first.
    Visit the Google Support page, then click the “open robots.txt tester” button. Select the property you would like to test, and you will be taken to the tester screen.
    To test your new robots.txt code, just delete what is currently in the box, replace it with your new code, and click “Test”. If the response is “allowed”, your code is valid and you can update your actual file with the new code.
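    If you prefer to sanity-check a draft locally before, or in addition to, using Google’s tester, Python’s standard library can parse the file and report which URLs a given bot may fetch. This is only a rough check (it does not understand wildcard patterns), and the file name and paths below are examples:

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    with open("robots.txt") as f:          # your local draft file
        parser.parse(f.read().splitlines())

    for path in ["/", "/wp-admin/", "/portfolio/crawlableportfolio"]:
        verdict = "allowed" if parser.can_fetch("Googlebot", path) else "blocked"
        print(f"{path}: {verdict}")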

    Hopefully this post has made you feel less scared of digging into your robots.txt file — because doing so is one way to improve your rankings and boost your SEO efforts.

  • 10 New Salesforce Flow Features to Shout About in Summer ’21

    Every time the Salesforce release notes are published, I get straight to reading them because I can guarantee there will be updates to Salesforce Flows. Summer ’21 has introduced new improvements to Salesforce Flow – and plenty of them to shout about! Here’s my summary… Read More

  • Salesforce Native Document Generation [In-Depth Review]

    S-Docs is a document generation tool that allows users to create, manage and generate templates in multiple formats to meet all their document generation needs. S-Docs prides itself on being the only document generation tool that is 100% native to Salesforce. The more data you… Read More

  • 6 Little-Known Facts About Modern Call-Back Technology

    Call queues and hold times are a traditional part of the call center experience. But tradition doesn’t always stand the test of time ─ as modern customer expectations evolve, businesses must change their service approaches accordingly.
    That’s where call-back technology comes in. This simple and powerful tool has been gaining momentum in the customer service world for its ability to lower call volumes, improve key call center metrics, and boost customer satisfaction.
    Are You Losing Customers to Hold Time?
    If you aren’t familiar with call-back technology and its benefits, we’ve put together a quick list so you can explore the benefits of this popular tool.
    1. Call-backs work with any call center platform.
    If you’re in the market for call-back technology, cloud-based call-backs are your best bet. They work with literally any call center platform, so you won’t need to worry about changing existing infrastructure.
    Some services offer call-backs as an add-on feature to their platforms, like Avaya or Genesys. These are great if you already have the platform set up, but depending on which product you use, you may not see the same level of functionality as a dedicated product.
    2. Call-back technology is totally secure.
    Security is an important part of a contact center’s operations, especially if you deal with sensitive information such as patient medical history or financial records. For this reason, businesses may hesitate to use call-back technology, especially tools that are cloud-based.
    Luckily, this isn’t such a problem anymore. By using appliances and on-site hardware, you can ensure all confidential data stays on premises and your operation meets its compliance requirements.
    Contact Centers Are Using More Call-Backs Than Ever
    3. Call-backs are fully customizable to your brand.
    Call-backs are simple to use in practice: your customer reaches out and hears a call-back offer message. Then they can choose to press ‘1’ to receive a call-back when an agent is available, so they don’t have to wait on the line.
    Beyond that, you can customize everything from your offer message to when you choose to offer customers a call-back and everything in between. You can also offer customers a call-back on your website or mobile app so that it’s fully aligned to your business’ brand. More on this in the next section!
    4. Call-backs can be used tactically for maximum impact.
    The only thing better than a contact center that offers call-backs is a contact center that’s optimized its processes to create a simple and frictionless customer experience! This is where call-back strategies come in.
    Many contact centers use an omnichannel strategy, where they offer their customers call-backs on their website or app in addition to their voice channel using Visual IVR and Conversation Scheduling. This creates an easy way for customers to request a chat with your agents without overwhelming your phone lines.
    The Actual Difference Between Virtual Queuing and Call-Backs
    5. Call-backs improve key call center metrics.
    ‘Immediate ROI’ isn’t something you hear every day. Yet many call centers report seeing just that with call-back technology. This is especially true for the following contact center KPIs:
    Abandonment rate.
    A high abandonment rate occurs when your customers end their call before reaching an agent, indicating frustration and service dissatisfaction. Call-backs have an immediate impact on this – by offering a call-back, the customer can opt out of the call queue instead and receive a call from an agent later.
    Customer Satisfaction (CSat) score.
    This is simple: customers universally hate waiting on hold. Eliminate the need for hold time, and your customers will be happier and less frustrated, therefore increasing your CSat score. You’ll be astounded at just how much wait times impact your customer satisfaction levels!
    First Call Resolution (FCR).
    You’ll be hard pressed to find a customer who prefers having their service interaction split into multiple calls. That’s why FCR is such an important metric for call centers. By using call-back technology strategically, you can increase your agents’ chances of solving an issue on the first go. This is even more impactful when combined with a Visual IVR to collect information more accurately.
    How to Overcome Challenges with Your Call Center Metrics
    6. Call-backs reduce agent overwhelm during high call volumes.
    Your agents are the unsung heroes of the call center as they handle all one-on-one customer engagements. This also means that they’ll be the hardest hit during times of high call volumes, making them more susceptible to burnout and increasing their likelihood of making mistakes.
    Call-backs smooth and flatten out call spikes. This way, your agents aren’t worrying about the looming call queues and frustrated customers waiting for them. This will also help reduce agent turnover and improve retention.
    The post first appeared on Fonolo.

  • Salesforce Data Pipelines: Consolidate and Clean Data from Multiple Systems, Within Salesforce

    Something new has landed! Something that is making it so much easier for Salesforce Admins to clean, transform and merge Salesforce data, all within the native Salesforce platform. It’s a brand new product – Salesforce Data Pipelines, which leverages the power of Tableau CRM Data… Read More

  • Monarchists

    For as long as there’s been recorded history, kings and queens have ruled and been celebrated by their subjects. Not everywhere, not all the time, but widely.

    Not simply the royalty of nations, but of organizations as well.

    It’s worth noting that in addition to monarchs, there are monarchists, citizens and employees and followers who prefer the certainty that comes from someone else.

    Royalty offers something to some of those who are ruled. If it didn’t, it wouldn’t exist.

    As Sahlins and Graeber outline in their extraordinary (and dense) book on Kings, there’s often a pattern in the nature of monarchs. Royalty doesn’t have to play by the same cultural rules, and often ‘comes from away.’ Having someone from a different place and background allows the population to let themselves off the hook when it comes to creating the future.

    If your participation in leadership is not required, then you’re free to simply be a spectator.

    When we industrialized the world over the last century, we defaulted to this structure. Many Western industrial organizations began as founder-celebrated and founder-driven. CEOs could, apparently, do no wrong. Until the world their business operated in changed.

    In large corporations, the autocratic, well-paid chieftain has the trappings of a monarch. A private air force, minions and the automatic benefit of the doubt. Working in this setting requires obedience and effort from employees more than agency or independence.

    A well-functioning constitutional monarchy is surprisingly effective. That’s not the problem. The problem is what happens when it stops functioning well. The problem can happen when royalty becomes selfish, shortsighted or impatient. Or the problem could be a pattern of employees or members or citizens failing to participate. Resilience disappears and the system becomes brittle.

    When the world changes, and it does, faster than ever, it’s community and connection that moves us forward.

    Modern organizations are discovering that all of us know more than any of us, and that having engaged individuals ready not only to speak up but to eagerly take responsibility for the work they do is an effective, resilient and equitable way to show up in the world.

  • Why Use Landing Pages

    Good advice is: just do it! Build a separate landing page for each target of your campaign. Because otherwise it’s like, “The campaign actually works, but it doesn’t convert well on the website.” Example: the campaign successfully drives traffic to the regular website, which then doesn’t deliver what was promised, or is too complicated or too inflexible. Websites often have too many different kinds of content and directions; customers get confused, don’t feel their needs are addressed – and bounce. Right? What do you think?
    submitted by /u/paulemannski

  • Understanding the Marketing Mix Concept – The Four Ps

    What makes a successful marketer? One who succeeds in determining the right product and positioning, setting the right price, and using the right method to promote it. Unfortunately, these achievements are no easy feat and require a lot of research and effort to get right. If you aren’t able to achieve even one of…
    The post Understanding the Marketing Mix Concept – The Four Ps appeared first on Benchmark Email.